Physical Sciences and Mathematics Commons

Articles 1 - 30 of 178

Full-Text Articles in Physical Sciences and Mathematics

Analysis Of Covariance (ANCOVA) In Randomized Trials: More Precision, Less Conditional Bias, And Valid Confidence Intervals, Without Model Assumptions, Bingkai Wang, Elizabeth Ogburn, Michael Rosenblum Oct 2018

Johns Hopkins University, Dept. of Biostatistics Working Papers

Covariate adjustment" in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called “covariates"). The baseline variables could include, e.g., age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We focus on the ANCOVA estimator, which involves fitting a linear model for the outcome given the treatment arm and baseline variables, and trials with equal probability of assignment to treatment and control. We prove the following new (to the best …


Robust Estimation Of The Average Treatment Effect In Alzheimer's Disease Clinical Trials, Michael Rosenblum, Aidan McDermott, Elizabeth Colantuoni Mar 2018

Johns Hopkins University, Dept. of Biostatistics Working Papers

The primary analysis of Alzheimer's disease clinical trials often involves a mixed model for repeated measures (MMRM) approach. We consider another estimator of the average treatment effect, called targeted minimum loss-based estimation (TMLE). This estimator is more robust to violations of assumptions about missing data than MMRM.

We compare TMLE versus MMRM by analyzing data from a completed Alzheimer's disease trial and by simulation studies. The simulations involved different missing data distributions, where loss to follow-up at a given visit could depend on baseline variables, treatment assignment, and the outcome measured at previous visits. The TMLE generally has improved …
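
The paper's setting is longitudinal with missing data; as a much-simplified illustration of the TMLE targeting idea only, the sketch below computes a single-time-point TMLE of the average treatment effect for a fully observed binary outcome, with the randomization probability known by design (all data and names are invented):

```python
# Toy single-time-point TMLE; the paper's estimator handles longitudinal
# outcomes and missingness, which this sketch does not.
import numpy as np
import statsmodels.api as sm
from scipy.special import logit, expit

rng = np.random.default_rng(1)
n = 2000
W = rng.normal(size=(n, 2))                 # baseline covariates
A = rng.integers(0, 2, n)                   # randomized, P(A=1) = 0.5
Y = rng.binomial(1, expit(0.5 * A + W[:, 0] - 0.5 * W[:, 1]))

# Step 1: initial fit of the outcome regression E[Y | A, W]
X = np.column_stack([np.ones(n), A, W])
qfit = sm.GLM(Y, X, family=sm.families.Binomial()).fit()
Q1 = qfit.predict(np.column_stack([np.ones(n), np.ones(n), W]))
Q0 = qfit.predict(np.column_stack([np.ones(n), np.zeros(n), W]))
QA = np.where(A == 1, Q1, Q0)

# Step 2: targeting step, a one-dimensional logistic fluctuation with the
# "clever covariate" H(A, W) = A/g - (1-A)/(1-g), g = 0.5 known by design
g = 0.5
H = (A / g - (1 - A) / (1 - g)).reshape(-1, 1)
eps = sm.GLM(Y, H, family=sm.families.Binomial(),
             offset=logit(np.clip(QA, 1e-6, 1 - 1e-6))).fit().params[0]

# Step 3: update the initial fit and plug in
Q1s = expit(logit(np.clip(Q1, 1e-6, 1 - 1e-6)) + eps / g)
Q0s = expit(logit(np.clip(Q0, 1e-6, 1 - 1e-6)) - eps / (1 - g))
print("TMLE estimate of the average treatment effect:", Q1s.mean() - Q0s.mean())
```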


Optimized Adaptive Enrichment Designs For Multi-Arm Trials: Learning Which Subpopulations Benefit From Different Treatments, Jon Arni Steingrimsson, Joshua Betz, Tianchen Qian, Michael Rosenblum Jan 2018

Johns Hopkins University, Dept. of Biostatistics Working Papers

We consider the problem of designing a randomized trial for comparing two treatments versus a common control in two disjoint subpopulations. The subpopulations could be defined in terms of a biomarker or disease severity measured at baseline. The goal is to determine which treatments benefit which subpopulations. We develop a new class of adaptive enrichment designs tailored to solving this problem. Adaptive enrichment designs involve a preplanned rule for modifying enrollment based on accruing data in an ongoing trial. The proposed designs have preplanned rules for stopping accrual of treatment-by-subpopulation combinations, either for efficacy or futility. The motivation …
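
Schematically, the kind of preplanned interim rule involved can be sketched as follows (the boundaries below are arbitrary placeholders, not the optimized values the paper derives):

```python
# Placeholder efficacy/futility boundaries; the paper derives optimized
# boundaries rather than the fixed values used here.
def interim_decision(z, efficacy=2.4, futility=0.0):
    if z >= efficacy:
        return "stop accrual: efficacy"
    if z <= futility:
        return "stop accrual: futility"
    return "continue enrollment"

# Hypothetical interim z-statistics for two treatments x two subpopulations
interim_z = {("treatment 1", "subpop 1"): 2.7, ("treatment 1", "subpop 2"): -0.3,
             ("treatment 2", "subpop 1"): 1.1, ("treatment 2", "subpop 2"): 0.4}
for (arm, subpop), z in interim_z.items():
    print(f"{arm}, {subpop}: {interim_decision(z)}")
```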


Phase II Adaptive Enrichment Design To Determine The Population To Enroll In Phase III Trials, By Selecting Thresholds For Baseline Disease Severity, Yu Du, Gary L. Rosner, Michael Rosenblum Jan 2018

Johns Hopkins University, Dept. of Biostatistics Working Papers

We propose and evaluate a two-stage, phase 2, adaptive clinical trial design. Its goal is to determine whether future phase 3 (confirmatory) trials should be conducted, and if so, which population should be enrolled. The population selected for phase 3 enrollment is defined in terms of a disease severity score measured at baseline. We optimize the phase 2 trial design and analysis in a decision theory framework. Our utility function represents a combination of the cost of conducting phase 3 trials and, if the phase 3 trials are successful, the improved health of the future population minus the cost of …


Constructing A Confidence Interval For The Fraction Who Benefit From Treatment, Using Randomized Trial Data, Emily J. Huang, Ethan X. Fang, Daniel F. Hanley, Michael Rosenblum Oct 2017

Johns Hopkins University, Dept. of Biostatistics Working Papers

The fraction who benefit from treatment is the proportion of patients whose potential outcome under treatment is better than that under control. Inference on this parameter is challenging since it is only partially identifiable, even in our context of a randomized trial. We propose a new method for constructing a confidence interval for the fraction, when the outcome is ordinal or binary. Our confidence interval procedure is pointwise consistent. It does not require any assumptions about the joint distribution of the potential outcomes, although it has the flexibility to incorporate various user-defined assumptions. Unlike existing confidence interval methods for partially …
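
For the binary-outcome special case, the partial identifiability has a classical closed form, shown here for intuition (the paper's confidence interval procedure is more general and handles ordinal outcomes):

```latex
% Frechet--Hoeffding bounds for the binary-outcome special case, with
% p_1 = P(Y(1) = 1) and p_0 = P(Y(0) = 1) identified by randomization:
\[
\max\{0,\; p_1 - p_0\}
\;\le\;
\psi = P\big(Y(1) = 1,\, Y(0) = 0\big)
\;\le\;
\min\{p_1,\; 1 - p_0\}.
\]
```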


Comparison Of Adaptive Randomized Trial Designs For Time-To-Event Outcomes That Expand Versus Restrict Enrollment Criteria, To Test Non-Inferiority, Josh Betz, Jon Arni Steingrimsson, Tianchen Qian, Michael Rosenblum Sep 2017

Johns Hopkins University, Dept. of Biostatistics Working Papers

Adaptive enrichment designs involve preplanned rules for modifying patient enrollment criteria based on data accrued in an ongoing trial. These designs may be useful when it is suspected that a subpopulation, e.g., defined by a biomarker or risk score measured at baseline, may benefit more from treatment than the complementary subpopulation. We compare two types of such designs, for the case of two subpopulations that partition the overall population. The first type starts by enrolling the subpopulation where it is suspected the new treatment is most likely to work, and then may expand inclusion criteria if there is early evidence …


Optimal, Two Stage, Adaptive Enrichment Designs For Randomized Trials Using Sparse Linear Programming, Michael Rosenblum, Xingyuan Fang, Han Liu Jun 2017

Johns Hopkins University, Dept. of Biostatistics Working Papers

Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria based on accruing data in a randomized trial. We focus on designs where the overall population is partitioned into two predefined subpopulations, e.g., based on a biomarker or risk score measured at baseline. The goal is to learn which populations benefit from an experimental treatment. Two critical components of adaptive enrichment designs are the decision rule for modifying enrollment, and the multiple testing procedure. We provide a general method for simultaneously optimizing these components for two stage, adaptive enrichment designs. We minimize the expected sample size under constraints on power …


Estimating Autoantibody Signatures To Detect Autoimmune Disease Patient Subsets, Zhenke Wu, Livia Casciola-Rosen, Ami A. Shah, Antony Rosen, Scott L. Zeger Apr 2017

Johns Hopkins University, Dept. of Biostatistics Working Papers

Autoimmune diseases are characterized by highly specific immune responses against molecules in self-tissues. Different autoimmune diseases are characterized by distinct immune responses, making autoantibodies useful for diagnosis and prediction. In many diseases, the targets of autoantibodies are incompletely defined. Although the technologies for autoantibody discovery have advanced dramatically over the past decade, each of these techniques generates hundreds of possibilities, which are onerous and expensive to validate. We set out to establish a method to greatly simplify autoantibody discovery, using a pre-filtering step to define subgroups with similar specificities based on migration of labeled, immunoprecipitated proteins on sodium dodecyl sulfate …


It's All About Balance: Propensity Score Matching In The Context Of Complex Survey Data, David Lenis, Trang Q. Nguyen, Nian Dong, Elizabeth A. Stuart Feb 2017

Johns Hopkins University, Dept. of Biostatistics Working Papers

Many research studies aim to draw causal inferences using data from large, nationally representative survey samples, and many of these studies use propensity score matching to make those causal inferences as rigorous as possible given the non-experimental nature of the data. However, very few applied studies are careful to incorporate the survey design into the propensity score analysis, which may mean that the results do not yield valid population-level inferences. This may be because few methodological studies examine how best to combine these methods. Furthermore, even fewer of the methodological studies incorporate different non-response mechanisms in their analysis. This study examines methods …
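
One analysis strategy in this space can be sketched as follows: estimate propensity scores, match, and carry the survey weights of the matched units into the outcome comparison. This is a minimal illustration on invented data, not the paper's recommended procedure:

```python
# 1:1 nearest-neighbor propensity score matching with survey weights retained
# in the outcome analysis. Data, weights, and the weighting choice are
# illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 3))                     # confounders
w = rng.uniform(0.5, 2.0, n)                    # survey weights
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = 2.0 * A + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
treated, control = np.where(A == 1)[0], np.where(A == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Survey-weighted difference in means over matched pairs
att = np.average(Y[treated] - Y[matched_control], weights=w[treated])
print("Matched, survey-weighted ATT estimate:", att)
```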


Improving Power In Group Sequential, Randomized Trials By Adjusting For Prognostic Baseline Variables And Short-Term Outcomes, Tianchen Qian, Michael Rosenblum, Huitong Qiu Dec 2016

Johns Hopkins University, Dept. of Biostatistics Working Papers

In group sequential designs, adjusting for baseline variables and short-term outcomes can lead to increased power and reduced sample size. We derive formulas for the precision gain from such variable adjustment using semiparametric estimators for the average treatment effect, and give new results on what conditions lead to substantial power gains and sample size reductions. The formulas reveal how the impact of prognostic variables on the precision gain is modified by the number of pipeline participants, analysis timing, enrollment rate, and treatment effect heterogeneity, when the semiparametric estimator uses correctly specified models. Given set prognostic value of baseline variables and …
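
A familiar single-stage benchmark, which the paper's formulas generalize to the group sequential setting with pipeline participants, is the ANCOVA-style variance reduction:

```latex
% Benchmark for the simplest case (single-stage trial, no pipeline data):
% if baseline variables explain a fraction R^2 of the outcome variance, then
\[
\operatorname{Var}\big(\hat{\Delta}_{\mathrm{adj}}\big)
\;\approx\;
(1 - R^2)\,\operatorname{Var}\big(\hat{\Delta}_{\mathrm{unadj}}\big),
\]
% i.e., roughly a 100 R^2 percent reduction in required sample size at fixed
% power.
```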


Stochastic Optimization Of Adaptive Enrichment Designs For Two Subpopulations, Aaron Fisher, Michael Rosenblum Dec 2016

Johns Hopkins University, Dept. of Biostatistics Working Papers

An adaptive enrichment design is a randomized trial that allows enrollment criteria to be modified at interim analyses, based on a preset decision rule. When there is prior uncertainty regarding treatment effect heterogeneity, these trial designs can provide improved power for detecting treatment effects in subpopulations. We present a simulated annealing approach to search over the space of decision rules and other parameters for an adaptive enrichment design. The goal is to minimize the expected number enrolled or expected duration, while preserving the appropriate power and Type I error rate. We also explore the benefits of parallel computation in the …
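
The generic simulated annealing loop looks as follows; in this application, the objective would be a Monte Carlo estimate of expected sample size or duration with penalties enforcing the power and Type I error constraints. The cooling schedule, proposal, and toy objective below are placeholders:

```python
# Generic simulated annealing skeleton of the kind applied to trial-design
# parameters (decision-rule cutoffs, per-stage sample sizes).
import math
import numpy as np

def simulated_annealing(objective, x0, propose, n_iter=10_000, t0=1.0, rng=None):
    """Minimize `objective` by simulated annealing, starting from design x0."""
    rng = rng or np.random.default_rng()
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for k in range(1, n_iter + 1):
        t = t0 / math.log(k + 1)            # logarithmic cooling (one option)
        y = propose(x, rng)                 # perturb the candidate design
        fy = objective(y)
        # Accept improvements always; worse moves with Boltzmann probability
        if fy <= fx or rng.uniform() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Example: minimize a toy "expected sample size" surrogate over two cutoffs
obj = lambda c: (c[0] - 1.2) ** 2 + (c[1] - 0.4) ** 2
step = lambda c, rng: c + rng.normal(scale=0.1, size=2)
print(simulated_annealing(obj, np.zeros(2), step)[0])
```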


Using Sensitivity Analyses For Unobserved Confounding To Address Covariate Measurement Error In Propensity Score Methods, Kara E. Rudolph, Elizabeth A. Stuart Nov 2016

Johns Hopkins University, Dept. of Biostatistics Working Papers

Propensity score methods are a popular tool to control for confounding in observational data, but their bias-reduction properties are threatened by covariate measurement error. There are few easy-to-implement methods to correct for such bias. We describe and demonstrate how existing sensitivity analyses for unobserved confounding---propensity score calibration, VanderWeele and Arah's bias formulas, and Rosenbaum's sensitivity analysis---can be adapted to address this problem. In a simulation study, we examined the extent to which these sensitivity analyses can correct for several measurement error structures: classical, systematic differential, and heteroscedastic covariate measurement error. We then apply these approaches to address covariate measurement error …
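
For orientation, the simplest of the VanderWeele-Arah bias formulas (binary unmeasured confounder U, no U-by-treatment interaction; stated here schematically) is the following; roughly speaking, the adaptation treats the correctly measured covariate behind its error-prone proxy as playing the role of U:

```latex
% Simplified bias formula within strata of measured covariates C:
\[
\mathrm{bias}
\;=\;
\gamma \big[\, P(U = 1 \mid A = 1, C = c) - P(U = 1 \mid A = 0, C = c) \,\big],
\qquad
\gamma = E[Y \mid A, C, U = 1] - E[Y \mid A, C, U = 0].
\]
```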


Censoring Unbiased Regression Trees And Ensembles, Jon Arni Steingrimsson, Liqun Diao, Robert L. Strawderman Oct 2016

Johns Hopkins University, Dept. of Biostatistics Working Papers

This paper proposes a novel approach to building regression trees and ensemble learning in survival analysis. By first extending the theory of censoring unbiased transformations, we construct observed data estimators of full data loss functions in cases where responses can be right censored. This theory is used to construct two specific classes of methods for building regression trees and regression ensembles that respectively make use of Buckley-James and doubly robust estimating equations for a given full data risk function. For the particular case of squared error loss, we further show how to implement these algorithms using existing software (e.g., CART, …
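
The starting point can be stated compactly. The simplest censoring unbiased transformation is the inverse-probability-of-censoring-weighted (IPCW) form below, which the paper's Buckley-James and doubly robust constructions refine (assuming censoring is conditionally independent of the survival time given the covariates X):

```latex
% IPCW censoring unbiased transformation: with survival time T, censoring
% time C, and event indicator Delta = 1{T <= C},
\[
L^{*} \;=\; \frac{\Delta\, L(T, X)}{P(C \ge T \mid X)},
\qquad
E\big[ L^{*} \mid T, X \big] \;=\; L(T, X),
\]
% so observed-data risks built from L^{*} are unbiased for the full-data
% risk E[L(T, X)].
```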


Matching The Efficiency Gains Of The Logistic Regression Estimator While Avoiding Its Interpretability Problems, In Randomized Trials, Michael Rosenblum, Jon Arni Steingrimsson Oct 2016

Johns Hopkins University, Dept. of Biostatistics Working Papers

Adjusting for prognostic baseline variables can lead to improved power in randomized trials. For binary outcomes, a logistic regression estimator is commonly used for such adjustment, and it can yield substantial efficiency gains in practice; e.g., gains equivalent to reducing the required sample size by 20-28% were observed in a recent survey of traumatic brain injury trials. Robinson and Jewell (1991) proved that the logistic regression estimator is guaranteed to have equal or better asymptotic efficiency compared to the unadjusted estimator (which ignores baseline variables). Unfortunately, the logistic regression estimator has the following dangerous vulnerabilities: it is only interpretable when …
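
The interpretability fix hinted at in the title is usually achieved by standardizing (marginalizing) the fitted logistic model rather than reporting its treatment coefficient. A sketch on invented data follows; the paper's exact estimator may differ in details:

```python
# Fit the logistic model, then average predicted probabilities with everyone
# set to treatment and to control. The result is a marginal risk difference,
# interpretable regardless of treatment effect heterogeneity, unlike the
# conditional odds ratio exp(beta) from the model itself.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
W = rng.normal(size=(n, 2))                   # prognostic baseline variables
A = rng.integers(0, 2, n)                     # randomized assignment
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * A + W[:, 0]))))

X = np.column_stack([np.ones(n), A, W])
fit = sm.GLM(Y, X, family=sm.families.Binomial()).fit()

X1 = np.column_stack([np.ones(n), np.ones(n), W])    # set A = 1 for all
X0 = np.column_stack([np.ones(n), np.zeros(n), W])   # set A = 0 for all
print("Standardized risk difference:",
      fit.predict(X1).mean() - fit.predict(X0).mean())
```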


Improving Precision By Adjusting For Baseline Variables In Randomized Trials With Binary Outcomes, Without Regression Model Assumptions, Jon Arni Steingrimsson, Daniel F. Hanley, Michael Rosenblum Aug 2016

Johns Hopkins University, Dept. of Biostatistics Working Papers

In randomized clinical trials with baseline variables that are prognostic for the primary outcome, there is potential to improve precision and reduce sample size by appropriately adjusting for these variables. A major challenge is that there are multiple statistical methods to adjust for baseline variables, but little guidance on which is best to use in a given context. The choice of method can have important consequences. For example, one commonly used method leads to uninterpretable estimates if there is any treatment effect heterogeneity, which would jeopardize the validity of trial conclusions. We give practical guidance on how to avoid this …


Sensitivity Of Trial Performance To Delayed Outcomes, Accrual Rates, And Prognostic Variables Based On A Simulated Randomized Trial With Adaptive Enrichment, Tianchen Qian, Elizabeth Colantuoni, Aaron Fisher, Michael Rosenblum Aug 2016

Johns Hopkins University, Dept. of Biostatistics Working Papers

Adaptive enrichment designs involve rules for restricting enrollment to a subset of the population during the course of an ongoing trial. This can be used to target those who benefit from the experimental treatment. To leverage prognostic information in baseline variables and short-term outcomes, we use a semiparametric, locally efficient estimator, and investigate its strengths and limitations compared to standard estimators. Through simulation studies, we assess how sensitive the trial performance (Type I error, power, expected sample size, trial duration) is to different design characteristics. Our simulation distributions mimic features of data from the Alzheimer’s Disease Neuroimaging Initiative, and involve …


Inequality In Treatment Benefits: Can We Determine If A New Treatment Benefits The Many Or The Few?, Emily Huang, Ethan Fang, Daniel Hanley, Michael Rosenblum Dec 2015

Johns Hopkins University, Dept. of Biostatistics Working Papers

The primary analysis in many randomized controlled trials focuses on the average treatment effect and does not address whether treatment benefits are widespread or limited to a select few. This problem affects many disease areas, since it stems from how randomized trials, often the gold standard for evaluating treatments, are designed and analyzed. Our goal is to learn about the fraction who benefit from a treatment, based on randomized trial data. We consider the case where the outcome is ordinal, with binary outcomes as a special case. In general, the fraction who benefit is a non-identifiable parameter, and the best …


Nested Partially-Latent Class Models For Dependent Binary Data, Estimating Disease Etiology, Zhenke Wu, Maria Deloria-Knoll, Scott L. Zeger Nov 2015

Johns Hopkins University, Dept. of Biostatistics Working Papers

The Pneumonia Etiology Research for Child Health (PERCH) study seeks to use modern measurement technology to infer the causes of pneumonia for which gold-standard evidence is unavailable. The paper describes a latent variable model designed to infer from case-control data the etiology distribution for the population of cases, and for an individual case given his or her measurements. We assume each observation is drawn from a mixture model for which each component represents one cause or disease class. The model addresses a major limitation of the traditional latent class approach by taking account of residual dependence among multivariate binary outcome …
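
In schematic form, with notation simplified from the paper, the case model is a finite mixture over candidate causes:

```latex
% For a case with binary measurement vector M over J candidate causes,
\[
P(M = m) \;=\; \sum_{j=1}^{J} \pi_j\, P_j(m),
\qquad
\pi_j \ge 0, \quad \sum_{j=1}^{J} \pi_j = 1,
\]
% where pi is the population etiology distribution and P_j is the measurement
% distribution under cause j; the nested extension relaxes conditional
% independence of the components of M within each class.
```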


Adaptive Enrichment Designs For Randomized Trials With Delayed Endpoints, Using Locally Efficient Estimators To Improve Precision, Michael Rosenblum, Tianchen Qian, Yu Du, Huitong Qiu Apr 2015

Johns Hopkins University, Dept. of Biostatistics Working Papers

Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria based on accrued data in an ongoing trial. For example, enrollment of a subpopulation where there is sufficient evidence of treatment efficacy, futility, or harm could be stopped, while enrollment for the remaining subpopulations is continued. Most existing methods for constructing adaptive enrichment designs are limited to situations where patient outcomes are observed soon after enrollment. This is a major barrier to the use of such designs in practice, since for many diseases the outcome of most clinical importance does not occur shortly after enrollment. We propose a new class …


Applying Multiple Imputation For External Calibration To Propensity Score Analysis, Yenny Webb-Vargas, Kara E. Rudolph, D. Lenis, Peter Murakami, Elizabeth A. Stuart Jan 2015

Johns Hopkins University, Dept. of Biostatistics Working Papers

Although covariate measurement error is likely the norm rather than the exception, methods for handling covariate measurement error in propensity score methods have not been widely investigated. We consider a multiple imputation-based approach that uses an external calibration sample with information on the true and mismeasured covariates, Multiple Imputation for External Calibration (MI-EC), to correct for the measurement error, and investigate its performance using simulation studies. As expected, using the covariate measured with error leads to bias in the treatment effect estimate. In contrast, the MI-EC method can eliminate almost all the bias. We confirm that the outcome must be …
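
The MI-EC idea can be sketched as follows: fit an imputation model on the calibration sample, which observes both the true covariate and its error-prone version; multiply impute the true covariate in the main study; analyze each completed dataset; and combine. The sketch below uses invented data, ignores imputation-parameter uncertainty for brevity, and uses inverse-probability-of-treatment weighting as a stand-in for whichever propensity score method is of interest:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# External calibration sample: observes the true covariate X and its
# error-prone version X_star
m = 300
X_cal = rng.normal(size=m)
Xs_cal = X_cal + rng.normal(scale=0.5, size=m)
calib = sm.OLS(X_cal, sm.add_constant(Xs_cal)).fit()   # imputation model
sigma = np.sqrt(calib.scale)                           # residual SD

# Main sample: only the mismeasured covariate is available
n = 2000
X = rng.normal(size=n)
Xs = X + rng.normal(scale=0.5, size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-X)))
Y = A + X + rng.normal(size=n)

M = 20
estimates = []
for _ in range(M):
    # Draw an imputation of the true covariate ("improper" MI for brevity)
    X_imp = calib.predict(sm.add_constant(Xs)) + rng.normal(scale=sigma, size=n)
    ps = sm.GLM(A, sm.add_constant(X_imp),
                family=sm.families.Binomial()).fit().fittedvalues
    wts = A / ps + (1 - A) / (1 - ps)          # IPTW stand-in for matching
    estimates.append(sm.WLS(Y, sm.add_constant(A), weights=wts).fit().params[1])

# Point-estimate part of Rubin's rules (variance combination omitted)
print("MI-EC treatment effect estimate:", np.mean(estimates))
```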


Adaptive, Group Sequential Designs That Balance The Benefits And Risks Of Wider Inclusion Criteria, Michael Rosenblum, Brandon S. Luber, Richard E. Thompson, Daniel F. Hanley Jan 2015

Johns Hopkins University, Dept. of Biostatistics Working Papers

We propose a new class of adaptive randomized trial designs aimed at gaining the advantages of wider generalizability and faster recruitment, while mitigating the risks of including a population for which there is greater a priori uncertainty. Our designs use adaptive enrichment, i.e., they have preplanned decision rules for modifying enrollment criteria based on data accrued at interim analyses. For example, enrollment can be restricted if the participants from predefined subpopulations are not benefiting from the new treatment. To the best of our knowledge, our designs are the first adaptive enrichment designs to have all of the following features: the …


Cross-Design Synthesis For Extending The Applicability Of Trial Evidence When Treatment Effect Is Heterogeneous. Part I. Methodology, Ravi Varadhan, Carlos Weiss Nov 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

Randomized controlled trials (RCTs) provide reliable evidence for approval of new treatments, informing clinical practice, and coverage decisions. The participants in RCTs are often not a representative sample of the larger at-risk population. Hence it is argued that the average treatment effect from the trial is not generalizable to the larger at-risk population. An essential premise of this argument is that there is significant heterogeneity in the treatment effect (HTE). We present a new method to extrapolate the treatment effect from a trial to a target group that is inadequately represented in the trial, when HTE is present. Our method …


Cross-Design Synthesis For Extending The Applicability Of Trial Evidence When Treatment Effect Is Heterogeneous. Part II. Application And External Validation, Carlos Weiss, Ravi Varadhan Nov 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

Randomized controlled trials (RCTs) generally provide the most reliable evidence. When participants in RCTs are selected with respect to characteristics that are potential treatment effect modifiers, the average treatment effect from the trials may not be applicable to a specific target population. We present a new method to project the treatment effect from a RCT to a target group that is inadequately represented in the trial when there is heterogeneity in the treatment effect (HTE). The method integrates RCT and observational data through cross-design synthesis. An essential component is to identify HTE and a calibration factor for unmeasured confounding for …


Enhanced Precision In The Analysis Of Randomized Trials With Ordinal Outcomes, Iván Díaz, Elizabeth Colantuoni, Michael Rosenblum Oct 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

We present a general method for estimating the effect of a treatment on an ordinal outcome in randomized trials. The method is robust in that it does not rely on the proportional odds assumption. Our estimator leverages information in prognostic baseline variables, and has all of the following properties: (i) it is consistent; (ii) it is locally efficient; (iii) it is guaranteed to match or improve the precision of the standard, unadjusted estimator. To the best of our knowledge, this is the first estimator of the causal relation between a treatment and an ordinal outcome to satisfy these properties. We …
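
One example of a model-free estimand in this setting (for intuition; not necessarily the paper's primary target) is the Mann-Whitney parameter:

```latex
% For independent draws Y_1 from the treatment arm and Y_0 from the control
% arm of an ordinal outcome,
\[
\phi \;=\; P(Y_1 > Y_0) \;+\; \tfrac{1}{2}\, P(Y_1 = Y_0),
\]
% interpretable without the proportional odds assumption as the probability
% that a randomly chosen treated participant has a better outcome, ties
% split evenly.
```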


A Bayesian Approach To Joint Modeling Of Menstrual Cycle Length And Fecundity, Kirsten J. Lum, Rajeshwari Sundaram, Germaine M. Buck-Louis, Thomas A. Louis Oct 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

Female menstrual cycle length is thought to play an important role in couple fecundity, or the biologic capacity for reproduction irrespective of pregnancy intentions. A complete assessment of the association between menstrual cycle length and fecundity requires a model that accounts for multiple risk factors (both male and female) and the couple's intercourse pattern relative to ovulation. We employ a Bayesian joint model consisting of a mixed effects accelerated failure time model for longitudinal menstrual cycle lengths and a hierarchical model for the conditional probability of pregnancy in a menstrual cycle given no pregnancy in previous cycles of trying, in …
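
In schematic form (notation simplified), the cycle-length component is a mixed effects accelerated failure time model:

```latex
% With T_ij the length of menstrual cycle j for woman i,
\[
\log T_{ij} \;=\; x_{ij}^{\top}\beta \;+\; b_i \;+\; \varepsilon_{ij},
\qquad
b_i \sim N(0, \sigma_b^2),
\]
% where b_i is a woman-specific random effect; the fecundity component then
% models the conditional probability of pregnancy in each cycle given no
% pregnancy in earlier cycles, linked to the intercourse pattern relative
% to ovulation.
```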


Cox Regression Models With Functional Covariates For Survival Data, Jonathan E. Gellar, Elizabeth Colantuoni, Dale M. Needham, Ciprian M. Crainiceanu Sep 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time-varying coefficients, and missing or unequally spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge …
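
Schematically, with notation simplified, the model is:

```latex
% For subject i with baseline functional exposure X_i(s), s in [0, S], and
% scalar covariates Z_i,
\[
\lambda_i(t) \;=\; \lambda_0(t)\,
\exp\!\Big( Z_i^{\top}\gamma \;+\; \int_0^{S} X_i(s)\,\beta(s)\, ds \Big),
\]
% with the coefficient function beta(s) expanded in a spline basis and a
% roughness penalty on beta added to the log partial likelihood.
```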


interAdapt -- An Interactive Tool For Designing And Evaluating Randomized Trials With Adaptive Enrollment Criteria, Aaron Joel Fisher, Harris Jaffee, Michael Rosenblum Jun 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

The interAdapt R package is designed to be used by statisticians and clinical investigators to plan randomized trials. It can be used to determine if certain adaptive designs offer tangible benefits compared to standard designs, in the context of investigators’ specific trial goals and constraints. Specifically, interAdapt compares the performance of trial designs with adaptive enrollment criteria versus standard (non-adaptive) group sequential trial designs. Performance is compared in terms of power, expected trial duration, and expected sample size. Users can work either directly in the R console or through a user-friendly Shiny application that requires no programming experience. Several added …


Targeted Maximum Likelihood Estimation Using Exponential Families, Iván Díaz, Michael Rosenblum Jun 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

Targeted maximum likelihood estimation (TMLE) is a general method for estimating parameters in semiparametric and nonparametric models. Each iteration of TMLE involves fitting a parametric submodel that targets the parameter of interest. We investigate the use of exponential families to define the parametric submodel. This implementation of TMLE gives a general approach for estimating any smooth parameter in the nonparametric model. A computational advantage of this approach is that each iteration of TMLE involves estimation of a parameter in an exponential family, which is a convex optimization problem for which software implementing reliable and computationally efficient methods exists. We illustrate …
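
In schematic form, the exponential-family submodel through a current density estimate p, with D the efficient influence function of the target parameter, is:

```latex
\[
p_{\epsilon}(x) \;=\;
\frac{p(x)\, \exp\{\epsilon\, D(x)\}}
     {\int p(u)\, \exp\{\epsilon\, D(u)\}\, du},
\]
% Fitting epsilon by maximum likelihood is then a convex problem, and the
% submodel's score at epsilon = 0 equals D(x) minus its mean, as the TMLE
% targeting step requires.
```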


Partially-Latent Class Models (PLCM) For Case-Control Studies Of Childhood Pneumonia Etiology, Zhenke Wu, Maria Deloria-Knoll, Laura L. Hammitt, Scott L. Zeger May 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

In population studies on the etiology of disease, one goal is the estimation of the fraction of cases attributable to each of several causes. For example, pneumonia is a clinical diagnosis of lung infection that may be caused by viral, bacterial, fungal, or other pathogens. The study of pneumonia etiology is challenging because directly sampling from the lung to identify the etiologic pathogen is not standard clinical practice in most settings. Instead, measurements from multiple peripheral specimens are made. This paper considers the problem of estimating the population etiology distribution and the individual etiology probabilities. We formulate the scientific …


Variable-Domain Functional Regression For Modeling ICU Data, Jonathan E. Gellar, Elizabeth Colantuoni, Dale M. Needham, Ciprian M. Crainiceanu May 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

We introduce a class of scalar-on-function regression models with subject-specific functional predictor domains. The fundamental idea is to consider a bivariate functional parameter that depends both on the functional argument and on the width of the functional predictor domain. Both parametric and nonparametric models are introduced to fit the functional coefficient. The nonparametric model is theoretically and practically invariant to functional support transformation, or support registration. Methods were motivated by and applied to a study of association between daily measures of the Intensive Care Unit (ICU) Sequential Organ Failure Assessment (SOFA) score and two outcomes: in-hospital mortality, and physical impairment …
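
Schematically (notation simplified; the domain-width normalization shown is one common choice), the model is:

```latex
% For subject i with functional predictor X_i(t) observed on the
% subject-specific domain [0, T_i],
\[
g\big(E[Y_i]\big) \;=\; \alpha \;+\;
\frac{1}{T_i} \int_0^{T_i} X_i(t)\,\beta(t, T_i)\, dt,
\]
% where the bivariate coefficient surface beta(t, T_i) depends on both the
% functional argument t and the width T_i of the subject's domain.
```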