Open Access. Powered by Scholars. Published by Universities.®

Statistical Methodology Commons

1,114 Full-Text Articles 1,507 Authors 603,045 Downloads 125 Institutions

All Articles in Statistical Methodology

1,114 full-text articles. Page 37 of 38.

Assessing Noninferiority In A Three-Arm Trial Using The Bayesian Approach, Pulak Ghosh, Farouk S. Nathoo, Mithat Gonen, Ram C. Tiwari 2010 University of Victoria

Memorial Sloan-Kettering Cancer Center, Dept. of Epidemiology & Biostatistics Working Paper Series

Non-inferiority trials, which aim to demonstrate that a test product is not worse than a competitor by more than a pre-specified small amount, are of great importance to the pharmaceutical community. As a result, methodology for designing and analyzing such trials is required, and developing new methods for such analysis is an important area of statistical research. The three-arm clinical trial is usually recommended for non-inferiority trials by the Food and Drug Administration (FDA). The three-arm trial consists of a placebo, a reference, and an experimental treatment, and simultaneously tests the superiority of the reference over the placebo along with …
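
The design above can be caricatured with a small Monte Carlo sketch of the two posterior statements it tests (the normal endpoints, flat priors, sample sizes, and margin below are all invented for illustration and are not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(5)

# invented normal-endpoint data for the three arms (higher = better)
n = 50
placebo = rng.normal(0.0, 1.0, n)
reference = rng.normal(1.0, 1.0, n)
experimental = rng.normal(0.9, 1.0, n)

margin = 0.3        # hypothetical pre-specified non-inferiority margin
draws = 100_000

def post(x):
    # flat-prior posterior for an arm mean, approximated by
    # Normal(sample mean, s^2 / n)
    return rng.normal(x.mean(), x.std(ddof=1) / np.sqrt(len(x)), draws)

mu_p, mu_r, mu_e = post(placebo), post(reference), post(experimental)

# posterior probabilities of the two required conclusions:
# the reference beats placebo (assay sensitivity), and the experimental
# arm is within the margin of the reference (non-inferiority)
p_assay = np.mean(mu_r > mu_p)
p_noninf = np.mean(mu_e - mu_r > -margin)
p_joint = np.mean((mu_r > mu_p) & (mu_e - mu_r > -margin))
```

A Bayesian three-arm analysis reports the joint posterior probability, which can be noticeably smaller than either marginal probability alone.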


Survival Prediction For Brain Tumor Patients Using Gene Expression Data, Vinicius Bonato 2010 University of Texas Graduate School of Biomedical Sciences at Houston

Dissertations & Theses (Open Access)

Brain tumor is one of the most aggressive types of cancer in humans, with an estimated median survival time of 12 months and only 4% of the patients surviving more than 5 years after disease diagnosis. Until recently, brain tumor prognosis has been based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. …


Nonparametric Regression With Missing Outcomes Using Weighted Kernel Estimating Equations, Lu Wang, Andrea Rotnitzky, Xihong Lin 2010 University of Michigan

Harvard University Biostatistics Working Paper Series

No abstract provided.


Simple Examples Of Estimating Causal Effects Using Targeted Maximum Likelihood Estimation, Michael Rosenblum, Mark J. van der Laan 2010 Johns Hopkins University

U.C. Berkeley Division of Biostatistics Working Paper Series

We present a brief overview of targeted maximum likelihood for estimating the causal effect of a single time point treatment and of a two time point treatment. We focus on simple examples demonstrating how to apply the methodology developed in (van der Laan and Rubin, 2006; Moore and van der Laan, 2007; van der Laan, 2010a,b). We include R code for the single time point case.


Likelihood Ratio Testing For Admixture Models With Application To Genetic Linkage Analysis, Chong-Zhi Di, Kung-Yee Liang 2010 Fred Hutchinson Cancer Research Center

Johns Hopkins University, Dept. of Biostatistics Working Papers

We consider likelihood ratio tests (LRT) and their modifications for homogeneity in admixture models. The admixture model is a special case of the two-component mixture model in which one component is indexed by an unknown parameter while the parameter value for the other component is known. It has been widely used in genetic linkage analysis under heterogeneity, in which the kernel distribution is binomial. For such models, it has long been recognized that testing for homogeneity is nonstandard and that the LRT statistic does not converge to a conventional chi-squared distribution. In this paper, we investigate the asymptotic behavior of the LRT for …


On Simulating Univariate And Multivariate Burr Type III And Type XII Distributions, Todd C. Headrick, Mohan D. Pant, Yanyan Sheng 2010 Southern Illinois University Carbondale

Mohan Dev Pant

This paper describes a method for simulating univariate and multivariate Burr Type III and Type XII distributions with specified correlation matrices. The methodology is based on the derivation of the parametric forms of a pdf and cdf for this family of distributions. The paper shows how shape parameters can be computed for specified values of skew and kurtosis. It is also demonstrated how to compute percentage points and other measures of central tendency such as the mode, median, and trimmed mean. Examples are provided to demonstrate how this Burr family can be used in the context of distribution fitting using …
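
The paper's derivations are not reproduced here; as a rough illustration of correlated Burr Type XII generation, the sketch below combines the closed-form quantile function with a Gaussian copula (a NORTA-style device, which is a standard alternative to, not a restatement of, the authors' method; the shape parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def burr12_norta(c, k, corr, n, rng):
    """Correlated Burr Type XII draws via a Gaussian copula (NORTA-style)."""
    c, k = np.asarray(c, float), np.asarray(k, float)
    z = rng.multivariate_normal(np.zeros(len(c)), corr, size=n)
    u = stats.norm.cdf(z)          # uniform margins with Gaussian dependence
    # Burr XII quantile function: F^{-1}(u) = ((1 - u)^(-1/k) - 1)^(1/c)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

# arbitrary shape parameters and a Gaussian correlation of 0.5
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
x = burr12_norta([2.0, 3.0], [4.0, 2.0], corr, 10_000, rng)
```

Note that the copula correlation is not the resulting Burr correlation; part of the paper's contribution is solving for parameters that hit specified correlations, skew, and kurtosis exactly.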


Graphical Procedures For Evaluating Overall And Subject-Specific Incremental Values From New Predictors With Censored Event Time Data, Hajime Uno, Tianxi Cai, Lu Tian, L. J. Wei 2010 Dana Farber Cancer Institute

Harvard University Biostatistics Working Paper Series

No abstract provided.


A New Class Of Dantzig Selectors For Censored Linear Regression Models, Yi Li, Lee Dicker, Sihai Dave Zhao 2010 Harvard University and Dana Farber Cancer Institute

Harvard University Biostatistics Working Paper Series

No abstract provided.


Statistical Power Analysis Using SAS And R, Peter Osmena 2010 California Polytechnic State University, San Luis Obispo

Statistics

Statistical power must be considered when designing an experiment, since it determines how useful the test will be. This paper discusses the concept of power, treating a general ANOVA test and a chi-squared test in greater depth. Computers make these power calculations relatively easy to perform, provided they are used correctly. Both SAS and R can carry out these calculations for a variety of tests. Calculating power for a general ANOVA test and a chi-squared test using these programs are …
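
The two power calculations named above follow directly from noncentral distributions, which is what SAS PROC POWER and R's power routines compute under the hood. A minimal Python sketch using scipy (the function names here are hypothetical):

```python
from scipy import stats

def anova_power(f_effect, n_total, k_groups, alpha=0.05):
    """Power of a one-way ANOVA F test for Cohen's effect size f."""
    df1, df2 = k_groups - 1, n_total - k_groups
    nc = n_total * f_effect ** 2                 # noncentrality parameter
    crit = stats.f.ppf(1 - alpha, df1, df2)      # rejection threshold
    return stats.ncf.sf(crit, df1, df2, nc)

def chisq_gof_power(w_effect, n, n_bins, alpha=0.05):
    """Power of a chi-squared goodness-of-fit test for Cohen's effect size w."""
    df = n_bins - 1
    nc = n * w_effect ** 2
    crit = stats.chi2.ppf(1 - alpha, df)
    return stats.ncx2.sf(crit, df, nc)

# e.g. a medium ANOVA effect (f = 0.25), 4 groups of 20, alpha = 0.05
power_a = anova_power(0.25, 80, 4)
# e.g. a medium goodness-of-fit effect (w = 0.3), n = 100, 5 categories
power_c = chisq_gof_power(0.3, 100, 5)
```

In both cases power is the probability that the test statistic, drawn from its noncentral distribution under the alternative, exceeds the null critical value.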


Software Internationalization: A Framework Validated Against Industry Requirements For Computer Science And Software Engineering Programs, John Huân Vũ 2010 California Polytechnic State University, San Luis Obispo

Master's Theses

View John Huân Vũ's thesis presentation at http://youtu.be/y3bzNmkTr-c.

In 2001, the ACM and IEEE Computing Curriculum stated that it was necessary to address "the need to develop implementation models that are international in scope and could be practiced in universities around the world." With increasing connectivity through the internet, the move towards a global economy and growing use of technology places software internationalization as a more important concern for developers. However, there has been a "clear shortage in terms of numbers of trained persons applying for entry-level positions" in this area. Eric Brechner, Director of Microsoft Development Training, suggested …


Gamma-Ray Spectroscopy: Meteorite Samples And The Search For 98Tc, Kristopher L. Merolla 2010 California Polytechnic State University - San Luis Obispo

Physics

The focus of this project is low-count-level gamma-ray spectroscopy on meteorite samples in search of a particular isotope of technetium (98Tc), which, according to stellar theory, should be present in the universe. The spectral lines of 99Tc have, however, been observed in S-, M-, and N-type stars, which makes finding naturally created 98Tc a possibility and thus justifies a search.


Penalized Functional Regression, Jeff Goldsmith, Jennifer Feder, Ciprian M. Crainiceanu, Brian Caffo, Daniel Reich 2010 Johns Hopkins Bloomberg School of Public Health, Department of Biostatistics

Johns Hopkins University, Dept. of Biostatistics Working Papers

We develop fast fitting methods for generalized functional linear models. An undersmooth of the functional predictor is obtained by projecting on a large number of smooth eigenvectors and the coefficient function is estimated using penalized spline regression. Our method can be applied to many functional data designs including functions measured with and without error, sparsely or densely sampled. The methods also extend to the case of multiple functional predictors or functional predictors with a natural multilevel structure. Our approach can be implemented using standard mixed effects software and is computationally fast. Our methodology is motivated by a diffusion tensor imaging …
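
The fitting strategy described (project the functional predictor on many smooth eigenvectors, then penalize the coefficients) can be illustrated with a deliberately crude sketch; it substitutes a ridge penalty on functional principal component scores for the authors' penalized-spline estimator, and all data and tuning values are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 100                     # 200 curves observed on a 100-point grid
t = np.linspace(0.0, 1.0, p)
X = rng.normal(size=(n, p)).cumsum(axis=1) / np.sqrt(p)   # rough random walks
beta = np.sin(2 * np.pi * t)        # true coefficient function
y = X @ beta / p + rng.normal(scale=0.05, size=n)  # discretized integral + noise

# project the centered predictors onto a generous (undersmoothed) number
# of leading eigenvectors, then shrink the resulting coefficients
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
K = 20                              # large basis: undersmooth, let the penalty work
Z = Xc @ Vt[:K].T                   # scores on the first K eigenvectors
lam = 1e-3                          # illustrative penalty weight
coef = np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T @ (y - y.mean()))
beta_hat = Vt[:K].T @ coef * p      # back-transform to a coefficient function
```

The real method's advantage is that the penalized-spline form maps onto standard mixed-effects software, so the smoothing parameter is estimated rather than fixed by hand as above.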


Regression Adjustment And Stratification By Propensity Score In Treatment Effect Estimation, Jessica A. Myers, Thomas A. Louis 2010 Johns Hopkins Bloomberg School of Public Health, Department of Biostatistics

Johns Hopkins University, Dept. of Biostatistics Working Papers

Propensity score adjustment of effect estimates in observational studies of treatment is a common technique used to control for bias in treatment assignment. In situations where matching on the propensity score is not possible or desirable, regression adjustment and stratification are two options. Regression adjustment is used most often and can be highly efficient, but it can lead to biased results when model assumptions are violated. Validity of the stratification approach depends on fewer model assumptions, but it is less efficient than regression adjustment when the regression assumptions hold. To investigate these issues, we compare stratification and regression adjustment by simulation. We …


Dynamic Model Pooling Methodology For Improving Aberration Detection Algorithms, Brenton J. Sellati 2010 University of Massachusetts Amherst

Masters Theses 1911 - February 2014

Syndromic surveillance is defined generally as the collection and statistical analysis of data believed to be leading indicators of deleterious activities developing within a system. Conceptually, syndromic surveillance can be applied to any discipline in which it is important to know when external influences manifest themselves in a system by forcing it to depart from its baseline. Comparisons of syndromic surveillance systems have led to mixed results, where models that dominate in one performance metric are often sorely deficient in another. This results in a zero-sum trade-off in which one performance metric must be afforded greater …
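
The thesis's pooling methodology is not shown in the abstract; for context, a standard single-model aberration detector of the kind such systems compare is an EWMA control chart, sketched here with an invented deterministic series:

```python
import numpy as np

def ewma_alarm(x, lam=0.2, L=3.0, baseline=30):
    """Flag time points where an EWMA of the series exceeds its control limit.

    The first `baseline` observations are treated as in-control and used to
    estimate the process mean and standard deviation.
    """
    x = np.asarray(x, dtype=float)
    mu = x[:baseline].mean()
    sigma = x[:baseline].std(ddof=1)
    z = mu
    alarms = np.zeros(len(x), dtype=bool)
    for i, v in enumerate(x):
        z = lam * v + (1 - lam) * z
        # time-varying EWMA control limit (Lucas & Saccucci form)
        width = L * sigma * np.sqrt(lam / (2 - lam)
                                    * (1 - (1 - lam) ** (2 * (i + 1))))
        alarms[i] = z > mu + width
    return alarms

# invented example: a flat series with a sustained jump at t = 40
counts = np.array([10.0] * 40 + [20.0] * 20)
alarms = ewma_alarm(counts)   # the jump is flagged from t = 40 onward
```

Detectors like this trade off sensitivity, timeliness, and false-alarm rate, which is exactly the multi-metric tension the pooling methodology aims to resolve.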


Wavelet-Based Functional Linear Mixed Models: An Application To Measurement Error–Corrected Distributed Lag Models, Elizabeth J. Malloy, Jeffrey S. Morris, Sara D. Adar, Helen Suh, Diane R. Gold, Brent A. Coull 2010 American University

Jeffrey S. Morris

Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient …


Members’ Discoveries: Fatal Flaws In Cancer Research, Jeffrey S. Morris 2010 The University of Texas M.D. Anderson Cancer Center

Jeffrey S. Morris

A recent article published in The Annals of Applied Statistics (AOAS) by two MD Anderson researchers—Keith Baggerly and Kevin Coombes—dissects results from a highly influential series of medical papers involving genomics-driven personalized cancer therapy, and outlines a series of simple yet fatal flaws that raise serious questions about the veracity of the original results. Having immediate and strong impact, this paper, along with related work, is providing the impetus for new standards of reproducibility in scientific research.


Statistical Contributions To Proteomic Research, Jeffrey S. Morris, Keith A. Baggerly, Howard B. Gutstein, Kevin R. Coombes 2010 The University of Texas M.D. Anderson Cancer Center

Jeffrey S. Morris

Proteomic profiling has the potential to impact the diagnosis, prognosis, and treatment of various diseases. A number of different proteomic technologies are available that allow us to look at many proteins at once, and all of them yield complex data that raise significant quantitative challenges. Inadequate attention to these quantitative issues can prevent these studies from achieving their desired goals, and can even lead to invalid results. In this chapter, we describe various ways the involvement of statisticians or other quantitative scientists in the study team can contribute to the success of proteomic research, and we outline some of the …


Informatics And Statistics For Analyzing 2-D Gel Electrophoresis Images, Andrew W. Dowsey, Jeffrey S. Morris, Howard G. Gutstein, Guang Z. Yang 2010 Imperial College London

Jeffrey S. Morris

Whilst recent progress in ‘shotgun’ peptide separation by integrated liquid chromatography and mass spectrometry (LC/MS) has enabled its use as a sensitive analytical technique, proteome coverage and reproducibility are still limited, and obtaining enough replicate runs for biomarker discovery remains a challenge. For these reasons, recent research demonstrates the continuing need for protein separation by two-dimensional gel electrophoresis (2-DE). However, with traditional 2-DE informatics, the digitized images are reduced to symbolic data through spot detection and quantification before proteins are compared for differential expression by spot matching. Recently, a more robust and automated paradigm has emerged where gels are directly …


Bayesian Random Segmentation Models To Identify Shared Copy Number Aberrations For Array CGH Data, Veerabhadran Baladandayuthapani, Yuan Ji, Rajesh Talluri, Luis E. Nieto-Barajas, Jeffrey S. Morris 2010 Texas A&M University

Jeffrey S. Morris

Array-based comparative genomic hybridization (aCGH) is a high-resolution high-throughput technique for studying the genetic basis of cancer. The resulting data consists of log fluorescence ratios as a function of the genomic DNA location and provides a cytogenetic representation of the relative DNA copy number variation. Analysis of such data typically involves estimation of the underlying copy number state at each location and segmenting regions of DNA with similar copy number states. Most current methods proceed by modeling a single sample/array at a time, and thus fail to borrow strength across multiple samples to infer shared regions of copy number aberrations. …


Simulating Multivariate G-And-H Distributions, Rhonda K. Kowalchuk, Todd C. Headrick 2010 Southern Illinois University Carbondale

Todd Christopher Headrick

The Tukey family of g-and-h distributions is often used to model univariate real-world data. There is a paucity of research demonstrating appropriate multivariate data generation using the g-and-h family of distributions with specified correlations. Therefore, the methodology and algorithms are presented to extend the g-and-h family from univariate to multivariate data generation. An example is provided along with a Monte Carlo simulation demonstrating the methodology. In addition, algorithms written in Mathematica 7.0 are available from the authors for implementing the procedure.
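
The authors' Mathematica algorithms are not reproduced here; a minimal Python sketch of multivariate g-and-h generation transforms correlated standard normals margin by margin. It omits the intermediate-correlation adjustment needed to hit a specified post-transform correlation exactly, and the parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def g_and_h(z, g, h):
    """Tukey g-and-h transform of standard normal draws z."""
    # g controls skew, h controls tail weight; g = 0 reduces to h-only
    core = np.expm1(g * z) / g if g != 0 else z
    return core * np.exp(h * z ** 2 / 2.0)

# correlated standard normals, then a g-and-h transform on each margin
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], corr, size=10_000)
x = np.column_stack([g_and_h(z[:, 0], 0.2, 0.1),
                     g_and_h(z[:, 1], 0.5, 0.05)])
```

Because the monotone transform attenuates the normal correlation, the methodology in the paper solves for adjusted intermediate correlations so the generated data match the specified correlation matrix.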


Digital Commons powered by bepress