Open Access. Powered by Scholars. Published by Universities.®

Statistical Methodology Commons

Articles 1 - 8 of 8

Full-Text Articles in Statistical Methodology

Comparison Of Hazard, Odds And Risk Ratio In The Two-Sample Survival Problem, Benedict P. Dormitorio Aug 2014

Dissertations

Cox proportional hazards regression is the standard method for analyzing treatment efficacy when time-to-event data are available. In the absence of time-to-event data, investigators may use logistic regression, which requires only the relative frequencies of events, or Poisson regression, which requires only interval-summarized frequency tables of time-to-event. When event frequencies are used instead of times-to-event, is power always lost?

We investigate the relative performance of the three methods. In particular, we compare the power of tests based on the respective effect-size estimates: (1) hazard ratio (HR), (2) odds ratio (OR), and (3) risk ratio (RR). We use a variety of survival …
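The three effect sizes above can all be computed from the same two-arm summary. A minimal sketch with made-up numbers (the event counts, sample sizes, and person-years are illustrative, not from the dissertation); the HR here assumes constant hazards, the quantity a Poisson model on interval-summarized follow-up would target:

```python
# Hypothetical two-arm summary (all numbers made up for illustration):
# events, sample size, and total follow-up time in person-years.
events_trt, n_trt, py_trt = 30, 100, 420.0
events_ctl, n_ctl, py_ctl = 45, 100, 380.0

# Risk ratio: ratio of event proportions (what "relative frequencies" give).
rr = (events_trt / n_trt) / (events_ctl / n_ctl)

# Odds ratio: the effect size targeted by logistic regression.
or_ = (events_trt / (n_trt - events_trt)) / (events_ctl / (n_ctl - events_ctl))

# Hazard ratio under constant hazards: events per person-time,
# the quantity Poisson regression on summarized follow-up estimates.
hr = (events_trt / py_trt) / (events_ctl / py_ctl)

print(f"RR={rr:.3f}  OR={or_:.3f}  HR={hr:.3f}")
# → RR=0.667  OR=0.524  HR=0.603
```

With fewer events in the treated arm, all three ratios fall below 1 but disagree in magnitude; the power comparison asks which of the corresponding tests detects such effects most reliably.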


Survival Prediction For Brain Tumor Patients Using Gene Expression Data, Vinicius Bonato May 2010

Dissertations & Theses (Open Access)

Brain tumors are among the most aggressive cancers in humans, with an estimated median survival time of 12 months and only 4% of patients surviving more than 5 years after diagnosis. Until recently, brain tumor prognosis was based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. …


A Note On Targeted Maximum Likelihood And Right Censored Data, Mark J. Van Der Laan, Daniel Rubin Oct 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

A popular way to estimate an unknown parameter is by substitution: evaluating the parameter at a likelihood-based fit of the data-generating density. In many cases such estimators have substantial bias and can fail to converge at the parametric rate. van der Laan and Rubin (2006) introduced targeted maximum likelihood learning, which removes these shackles from substitution estimators and brings them into full agreement with the locally efficient estimating-equation procedures presented in Robins and Rotnitzky (1992) and van der Laan and Robins (2003). This note illustrates how targeted maximum likelihood can be applied in right censored data …
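The bias of a misspecified substitution estimator is easy to see in a toy example. This sketch is only an illustration of plug-in estimation, not the note's setting: the exponential working model and lognormal truth are assumptions chosen to make the bias visible. The plug-in median from an exponential MLE stays biased as n grows, while the direct empirical median does not:

```python
import math
import random
import statistics

random.seed(3)

# Substitution (plug-in) estimation: fit a working density by maximum
# likelihood, then evaluate the target parameter (here, the median) at
# the fit. Truth: lognormal(0, 1), whose median is exactly 1. The
# exponential working model is deliberately misspecified.
n = 5000
data = [math.exp(random.gauss(0, 1)) for _ in range(n)]

lam_hat = n / sum(data)                # exponential MLE: 1 / sample mean
plugin_median = math.log(2) / lam_hat  # parameter evaluated at the fit
empirical_median = statistics.median(data)

print(f"plug-in={plugin_median:.3f}  empirical={empirical_median:.3f}  truth=1")
```

Under this misspecification the plug-in estimate converges to log(2)·e^{1/2} ≈ 1.14 rather than 1, a bias of the kind the targeting step is designed to address.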


Empirical Efficiency Maximization, Daniel B. Rubin, Mark J. Van Der Laan Jul 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

It has long been recognized that covariate adjustment can increase precision, even when it is not strictly necessary. The phenomenon receives particular emphasis in clinical trials, whether the outcomes are continuous, categorical, or censored times-to-event. Adjustment is often straightforward when a discrete covariate partitions the sample into a handful of strata, but becomes more involved when modern studies collect copious amounts of baseline information on each subject.

The dilemma helped motivate locally efficient estimation for coarsened data structures, as surveyed in the books of van der Laan and Robins (2003) and Tsiatis (2006). Here one fits a relatively small working model …
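The precision gain from adjustment can be seen in a toy randomized trial. This is an illustrative sketch, not the paper's estimator: treatment is randomized, a single prognostic baseline covariate W is residualized out ANCOVA-style, and the variances of the adjusted and unadjusted difference-in-means estimators are compared over repeated simulations:

```python
import random
import statistics

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

def trial(n=400, beta=2.0, effect=0.5):
    """One toy randomized trial: the outcome depends strongly on a
    baseline covariate W (slope beta) and weakly on treatment A."""
    ws = [random.gauss(0, 1) for _ in range(n)]
    arms = [i % 2 for i in range(n)]  # alternating randomization
    ys = [effect * a + beta * w + random.gauss(0, 1)
          for w, a in zip(ws, arms)]
    # Unadjusted estimator: difference in arm means.
    unadj = (mean([y for a, y in zip(arms, ys) if a == 1])
             - mean([y for a, y in zip(arms, ys) if a == 0]))
    # Adjusted estimator: regress Y on W, then difference the residual
    # means (a crude ANCOVA-style working-model adjustment).
    wbar, ybar = mean(ws), mean(ys)
    slope = (sum((w - wbar) * (y - ybar) for w, y in zip(ws, ys))
             / sum((w - wbar) ** 2 for w in ws))
    resid = [y - slope * w for w, y in zip(ws, ys)]
    adj = (mean([r for a, r in zip(arms, resid) if a == 1])
           - mean([r for a, r in zip(arms, resid) if a == 0]))
    return unadj, adj

reps = [trial() for _ in range(300)]
var_unadj = statistics.pvariance([u for u, _ in reps])
var_adj = statistics.pvariance([a for _, a in reps])
print(f"var(unadjusted)={var_unadj:.4f}  var(adjusted)={var_adj:.4f}")
```

Both estimators are unbiased here because treatment is randomized; adjustment only shrinks the variance, roughly by the share of outcome variation that W explains.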


Semiparametric Quantitative-Trait-Locus Mapping: Ii. On Censored Age-At-Onset, Ying Qing Chen, Chengcheng Hu, Rongling Wu Jul 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

In genetic studies, variation in genotypes may not only produce different inheritance patterns in qualitative traits, but may also affect age-at-onset as a quantitative trait. In this article, we use standard cross designs, such as backcross or F2, to propose hazard regression models, namely the additive hazards model, for quantitative-trait-locus mapping of age-at-onset, although the developed method can be extended to more complex designs. Exploiting the additive invariance of the additive hazards models in mixture probabilities, we develop flexible semiparametric methodologies for interval regression mapping without a heavy computing burden. A recently developed multiple-comparison procedure is adapted …


Loss-Based Estimation With Cross-Validation: Applications To Microarray Data Analysis And Motif Finding, Sandrine Dudoit, Mark J. Van Der Laan, Sunduz Keles, Annette M. Molinaro, Sandra E. Sinisi, Siew Leng Teng Dec 2003

U.C. Berkeley Division of Biostatistics Working Paper Series

Current statistical inference problems in genomic data analysis involve parameter estimation for high-dimensional multivariate distributions, with typically unknown and intricate correlation patterns among variables. Addressing these inference questions satisfactorily requires: (i) an intensive and thorough search of the parameter space to generate good candidate estimators, (ii) an approach for selecting an optimal estimator among these candidates, and (iii) a method for reliably assessing the performance of the resulting estimator. We propose a unified loss-based methodology for estimator construction, selection, and performance assessment with cross-validation. In this approach, the parameter of interest is defined as the risk minimizer for a suitable …
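Steps (i)–(iii) can be sketched on a toy regression problem. This is a minimal illustration, not the paper's genomic applications: the candidate estimators (constant vs. linear fits of E[Y|X]), the data-generating model, and the fold scheme are all assumptions. V-fold cross-validated squared-error risk selects among the candidates:

```python
import random

random.seed(0)

# Toy data: Y = 1 + 2*X + noise; the parameter of interest is the
# regression E[Y|X], the risk minimizer for squared-error loss.
xs = [random.uniform(-1, 1) for _ in range(100)]
ys = [1 + 2 * x + random.gauss(0, 0.3) for x in xs]

def fit_const(xtr, ytr):
    """Candidate 1: constant predictor (sample mean)."""
    m = sum(ytr) / len(ytr)
    return lambda x: m

def fit_linear(xtr, ytr):
    """Candidate 2: least-squares line."""
    xb, yb = sum(xtr) / len(xtr), sum(ytr) / len(ytr)
    b = (sum((x - xb) * (y - yb) for x, y in zip(xtr, ytr))
         / sum((x - xb) ** 2 for x in xtr))
    a = yb - b * xb
    return lambda x, a=a, b=b: a + b * x

def cv_risk(fitter, xs, ys, folds=5):
    """V-fold cross-validated squared-error risk of a candidate."""
    n, risk = len(xs), 0.0
    for v in range(folds):
        held_out = set(range(v, n, folds))
        xtr = [x for i, x in enumerate(xs) if i not in held_out]
        ytr = [y for i, y in enumerate(ys) if i not in held_out]
        f = fitter(xtr, ytr)
        risk += sum((ys[i] - f(xs[i])) ** 2 for i in held_out)
    return risk / n

risks = {name: cv_risk(f, xs, ys)
         for name, f in {"constant": fit_const, "linear": fit_linear}.items()}
best = min(risks, key=risks.get)
print(risks, "->", best)
```

The selector picks the linear candidate because its cross-validated risk is near the noise variance, while the constant candidate also pays for the unmodeled slope.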


Statistical Inference For Infinite Dimensional Parameters Via Asymptotically Pivotal Estimating Functions, Meredith A. Goldwasser, Lu Tian, L. J. Wei Nov 2003

Harvard University Biostatistics Working Paper Series

No abstract provided.


Tree-Based Multivariate Regression And Density Estimation With Right-Censored Data, Annette M. Molinaro, Sandrine Dudoit, Mark J. Van Der Laan Sep 2003

U.C. Berkeley Division of Biostatistics Working Paper Series

We propose a unified strategy for estimator construction, selection, and performance assessment in the presence of censoring. This approach is entirely driven by the choice of a loss function for the full (uncensored) data structure and can be stated in terms of the following three main steps. (1) Define the parameter of interest as the minimizer of the expected loss, or risk, for a full data loss function chosen to represent the desired measure of performance. Map the full data loss function into an observed (censored) data loss function having the same expected value and leading to an efficient estimator …
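The mapping in step (1) from a full-data loss to an observed-data loss with the same expected value can be illustrated with inverse-probability-of-censoring weighting (IPCW), one standard such mapping. In this sketch the censoring distribution is taken as a known exponential, an assumption made for brevity; in practice it would be estimated:

```python
import math
import random

random.seed(2)

lam_c = 0.5  # known censoring hazard (an assumption for this sketch)

def ipcw_risk(psi, obs):
    """IPCW estimate of the full-data risk E[(T - psi)^2] from censored
    observations (t, delta): uncensored terms are reweighted by
    1 / P(C > t) = exp(lam_c * t), so the expectation is preserved."""
    total = 0.0
    for t, delta in obs:
        if delta:
            total += (t - psi) ** 2 * math.exp(lam_c * t)
    return total / len(obs)

# Simulate event times T ~ Exp(1) and censoring times C ~ Exp(lam_c);
# we observe only (min(T, C), 1{T <= C}).
n = 50000
full, obs = [], []
for _ in range(n):
    t = random.expovariate(1.0)
    c = random.expovariate(lam_c)
    full.append(t)
    obs.append((min(t, c), t <= c))

psi = 1.0  # candidate prediction whose risk we evaluate
full_risk = sum((t - psi) ** 2 for t in full) / n
obs_risk = ipcw_risk(psi, obs)
print(f"full-data risk={full_risk:.3f}  IPCW observed-data risk={obs_risk:.3f}")
```

Both estimates target the same risk (here E[(T - 1)^2] = 1 for T ~ Exp(1)), which is what lets estimator selection and performance assessment proceed on the censored data as if the full-data loss were available.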