Statistical Methodology Commons

Johns Hopkins University, Dept. of Biostatistics Working Papers

Articles 1 - 30 of 59

Full-Text Articles in Statistical Methodology

Analysis Of Covariance (ANCOVA) In Randomized Trials: More Precision, Less Conditional Bias, And Valid Confidence Intervals, Without Model Assumptions, Bingkai Wang, Elizabeth Ogburn, Michael Rosenblum Oct 2018

"Covariate adjustment" in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called "covariates"). The baseline variables could include, e.g., age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We focus on the ANCOVA estimator, which involves fitting a linear model for the outcome given the treatment arm and baseline variables, and trials with equal probability of assignment to treatment and control. We prove the following new (to the best …
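
A minimal sketch of the ANCOVA estimator described above, on simulated data. All variable names (y, a, x1, x2) are illustrative, statsmodels is assumed to be available, and this is not the authors' code; the robust (HC) standard error reflects the abstract's point that the confidence interval should not rely on the linear model being correct.

```python
# Minimal ANCOVA sketch for a two-arm randomized trial (illustrative names only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                      # baseline covariate (e.g., standardized age)
x2 = rng.normal(size=n)                      # baseline covariate (e.g., disease severity)
a = rng.binomial(1, 0.5, size=n)             # 1:1 randomization to treatment vs. control
y = 1.0 * a + 0.8 * x1 - 0.5 * x2 + rng.normal(size=n)   # simulated outcome

df = pd.DataFrame({"y": y, "a": a, "x1": x1, "x2": x2})

# Unadjusted estimator: difference in arm means.
unadjusted = df.loc[df.a == 1, "y"].mean() - df.loc[df.a == 0, "y"].mean()

# ANCOVA estimator: coefficient on treatment in a linear model that also includes
# the baseline covariates, with heteroskedasticity-robust standard errors.
fit = smf.ols("y ~ a + x1 + x2", data=df).fit(cov_type="HC3")
print("unadjusted estimate:", round(unadjusted, 3))
print("ANCOVA estimate:", round(fit.params["a"], 3))
print("95% CI:", fit.conf_int().loc["a"].round(3).tolist())
```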


Robust Estimation Of The Average Treatment Effect In Alzheimer's Disease Clinical Trials, Michael Rosenblum, Aidan Mcdermont, Elizabeth Colantuoni Mar 2018

The primary analysis of Alzheimer's disease clinical trials often involves a mixed-model repeated measure (MMRM) approach. We consider another estimator of the average treatment effect, called targeted minimum loss based estimation (TMLE). This estimator is more robust to violations of assumptions about missing data than MMRM.

We compare TMLE versus MMRM by analyzing data from a completed Alzheimer's disease trial data set and by simulation studies. The simulations involved different missing data distributions, where loss to followup at a given visit could depend on baseline variables, treatment assignment, and the outcome measured at previous visits. The TMLE generally has improved …
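
A minimal sketch of a TMLE for the average treatment effect with a binary outcome, complete data, and known 1:1 randomization, assuming statsmodels. The variable names and working models are illustrative, and the longitudinal missing-data machinery that the paper actually addresses is omitted.

```python
# Minimal TMLE sketch: binary outcome, complete data, known randomization.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=(n, 2))                      # baseline covariates
a = rng.binomial(1, 0.5, size=n)                 # randomized treatment
p_true = 1 / (1 + np.exp(-(0.5 * a + x[:, 0] - 0.5 * x[:, 1])))
y = rng.binomial(1, p_true)                      # binary outcome

def expit(z):
    return 1 / (1 + np.exp(-z))

# Step 1: initial outcome regression Q(a, x) = P(Y = 1 | A = a, X = x).
design = np.column_stack([np.ones(n), a, x])
q_fit = sm.GLM(y, design, family=sm.families.Binomial()).fit()
q_obs = q_fit.predict(design)
q1 = q_fit.predict(np.column_stack([np.ones(n), np.ones(n), x]))
q0 = q_fit.predict(np.column_stack([np.ones(n), np.zeros(n), x]))

# Step 2: targeting step.  With known randomization probability g = 0.5, the
# "clever covariate" is H = A/g - (1-A)/(1-g); a one-parameter logistic
# fluctuation is fit using the initial fit as an offset.
g = 0.5
h = a / g - (1 - a) / (1 - g)
offset = np.log(q_obs / (1 - q_obs))
eps = sm.GLM(y, h.reshape(-1, 1), family=sm.families.Binomial(),
             offset=offset).fit().params[0]

# Step 3: update the arm-specific predictions and take the plug-in difference.
q1_star = expit(np.log(q1 / (1 - q1)) + eps / g)
q0_star = expit(np.log(q0 / (1 - q0)) - eps / (1 - g))
print("TMLE estimate of the ATE:", round(float(np.mean(q1_star - q0_star)), 3))
```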


Optimized Adaptive Enrichment Designs For Multi-Arm Trials: Learning Which Subpopulations Benefit From Different Treatments, Jon Arni Steingrimsson, Joshua Betz, Tianchen Qian, Michael Rosenblum Jan 2018

We consider the problem of designing a randomized trial for comparing two treatments versus a common control in two disjoint subpopulations. The subpopulations could be defined in terms of a biomarker or disease severity measured at baseline. The goal is to determine which treatments benefit which subpopulations. We develop a new class of adaptive enrichment designs tailored to solving this problem. Adaptive enrichment designs involve a preplanned rule for modifying enrollment based on accruing data in an ongoing trial. The proposed designs have preplanned rules for stopping accrual of treatment by subpopulation combinations, either for efficacy or futility. The motivation …


Phase II Adaptive Enrichment Design To Determine The Population To Enroll In Phase III Trials, By Selecting Thresholds For Baseline Disease Severity, Yu Du, Gary L. Rosner, Michael Rosenblum Jan 2018

We propose and evaluate a two-stage, phase 2, adaptive clinical trial design. Its goal is to determine whether future phase 3 (confirmatory) trials should be conducted, and if so, which population should be enrolled. The population selected for phase 3 enrollment is defined in terms of a disease severity score measured at baseline. We optimize the phase 2 trial design and analysis in a decision theory framework. Our utility function represents a combination of the cost of conducting phase 3 trials and, if the phase 3 trials are successful, the improved health of the future population minus the cost of …


Constructing A Confidence Interval For The Fraction Who Benefit From Treatment, Using Randomized Trial Data, Emily J. Huang, Ethan X. Fang, Daniel F. Hanley, Michael Rosenblum Oct 2017

The fraction who benefit from treatment is the proportion of patients whose potential outcome under treatment is better than that under control. Inference on this parameter is challenging since it is only partially identifiable, even in our context of a randomized trial. We propose a new method for constructing a confidence interval for the fraction, when the outcome is ordinal or binary. Our confidence interval procedure is pointwise consistent. It does not require any assumptions about the joint distribution of the potential outcomes, although it has the flexibility to incorporate various user-defined assumptions. Unlike existing confidence interval methods for partially …
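
For a binary outcome, the fraction who benefit P(Y(1)=1, Y(0)=0) is only partially identified: without assumptions on the joint distribution of potential outcomes it lies between max(0, p1 − p0) and min(p1, 1 − p0), where p1 and p0 are the arm-specific probabilities of a good outcome. Below is a minimal sketch of these plug-in bounds with a naive percentile bootstrap of the endpoints; it is not the paper's pointwise-consistent confidence interval procedure, and all names are illustrative.

```python
# Plug-in bounds on the fraction who benefit for a binary outcome, plus a naive
# bootstrap of the bound endpoints (illustrative; not the paper's procedure).
import numpy as np

def benefit_bounds(y, a):
    p1 = y[a == 1].mean()          # P(good outcome | treatment)
    p0 = y[a == 0].mean()          # P(good outcome | control)
    return max(0.0, p1 - p0), min(p1, 1.0 - p0)

rng = np.random.default_rng(2)
n = 400
a = rng.binomial(1, 0.5, size=n)
y = rng.binomial(1, np.where(a == 1, 0.6, 0.4))   # simulated trial outcomes

print("estimated bounds:", benefit_bounds(y, a))

boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                   # resample participants
    boots.append(benefit_bounds(y[idx], a[idx]))
boots = np.array(boots)
print("bootstrap 2.5th pct of lower bound:", round(np.percentile(boots[:, 0], 2.5), 3))
print("bootstrap 97.5th pct of upper bound:", round(np.percentile(boots[:, 1], 97.5), 3))
```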


Comparison Of Adaptive Randomized Trial Designs For Time-To-Event Outcomes That Expand Versus Restrict Enrollment Criteria, To Test Non-Inferiority, Josh Betz, Jon Arni Steingrimsson, Tianchen Qian, Michael Rosenblum Sep 2017

Adaptive enrichment designs involve preplanned rules for modifying patient enrollment criteria based on data accrued in an ongoing trial. These designs may be useful when it is suspected that a subpopulation, e.g., defined by a biomarker or risk score measured at baseline, may benefit more from treatment than the complementary subpopulation. We compare two types of such designs, for the case of two subpopulations that partition the overall population. The first type starts by enrolling the subpopulation where it is suspected the new treatment is most likely to work, and then may expand inclusion criteria if there is early evidence …


Estimating Autoantibody Signatures To Detect Autoimmune Disease Patient Subsets, Zhenke Wu, Livia Casciola-Rosen, Ami A. Shah, Antony Rosen, Scott L. Zeger Apr 2017

Autoimmune diseases are characterized by highly specific immune responses against molecules in self-tissues. Different autoimmune diseases are characterized by distinct immune responses, making autoantibodies useful for diagnosis and prediction. In many diseases, the targets of autoantibodies are incompletely defined. Although the technologies for autoantibody discovery have advanced dramatically over the past decade, each of these techniques generates hundreds of possibilities, which are onerous and expensive to validate. We set out to establish a method to greatly simplify autoantibody discovery, using a pre-filtering step to define subgroups with similar specificities based on migration of labeled, immunoprecipitated proteins on sodium dodecyl sulfate …


It's All About Balance: Propensity Score Matching In The Context Of Complex Survey Data, David Lenis, Trang Q. Nguyen, Nian Dong, Elizabeth A. Stuart Feb 2017

Many research studies aim to draw causal inferences using data from large, nationally representative survey samples, and many of these studies use propensity score matching to make those causal inferences as rigorous as possible given the non-experimental nature of the data. However, very few applied studies are careful about incorporating the survey design with the propensity score analysis, which may mean that the results don’t generate population inferences. This may be because few methodological studies examine how to best combine these methods. Furthermore, even fewer of the methodological studies incorporate different non-response mechanisms in their analysis. This study examines methods …
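
A minimal sketch of one way the two pieces can be combined: estimate propensity scores, do 1:1 nearest-neighbor matching, then carry the survey (design) weights into the outcome comparison. scikit-learn is assumed, all names are illustrative, and this is not presented as the authors' recommended procedure.

```python
# Propensity score matching followed by a survey-weighted outcome comparison
# (illustrative mechanics only, not the paper's recommended method).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=(n, 3))                          # baseline covariates
w = rng.uniform(0.5, 3.0, size=n)                    # survey (design) weights
pt = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, pt)                              # non-randomized "treatment"
y = 0.5 * t + x[:, 0] + rng.normal(size=n)           # outcome

# 1) Propensity scores from a logistic regression of treatment on covariates.
ps = LogisticRegression(max_iter=1000).fit(x, t).predict_proba(x)[:, 1]

# 2) 1:1 nearest-neighbor matching on the propensity score, treated to control.
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, match = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[match.ravel()]

# 3) Outcome analysis on the matched sample, weighting each unit by its survey
#    weight so the contrast refers (approximately) to the target population.
def wmean(v, wt):
    return np.sum(v * wt) / np.sum(wt)

effect = wmean(y[treated], w[treated]) - wmean(y[matched_control], w[matched_control])
print("survey-weighted matched difference in means:", round(effect, 3))
```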


Improving Power In Group Sequential, Randomized Trials By Adjusting For Prognostic Baseline Variables And Short-Term Outcomes, Tianchen Qian, Michael Rosenblum, Huitong Qiu Dec 2016

In group sequential designs, adjusting for baseline variables and short-term outcomes can lead to increased power and reduced sample size. We derive formulas for the precision gain from such variable adjustment using semiparametric estimators for the average treatment effect, and give new results on what conditions lead to substantial power gains and sample size reductions. The formulas reveal how the impact of prognostic variables on the precision gain is modified by the number of pipeline participants, analysis timing, enrollment rate, and treatment effect heterogeneity, when the semiparametric estimator uses correctly specified models. Given set prognostic value of baseline variables and …
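
The paper's formulas account for pipeline participants, analysis timing, enrollment rate, and treatment effect heterogeneity and are not reproduced here. As a crude reference point for the headline claim only (a standard back-of-envelope relation, not the paper's result), an efficient covariate-adjusted estimator of the average treatment effect has asymptotic variance reduced roughly by the factor one minus the squared correlation between outcome and adjustment variables, which carries over to the required sample size:

\[
  \operatorname{Var}\!\big(\hat{\Delta}_{\text{adj}}\big) \;\approx\; (1 - R^2)\,\operatorname{Var}\!\big(\hat{\Delta}_{\text{unadj}}\big),
  \qquad
  n_{\text{adj}} \;\approx\; (1 - R^2)\, n_{\text{unadj}},
\]

where R^2 is the proportion of outcome variance explained by the baseline variables and short-term outcomes within each arm.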


Stochastic Optimization Of Adaptive Enrichment Designs For Two Subpopulations, Aaron Fisher, Michael Rosenblum Dec 2016

An adaptive enrichment design is a randomized trial that allows enrollment criteria to be modified at interim analyses, based on a preset decision rule. When there is prior uncertainty regarding treatment effect heterogeneity, these trial designs can provide improved power for detecting treatment effects in subpopulations. We present a simulated annealing approach to search over the space of decision rules and other parameters for an adaptive enrichment design. The goal is to minimize the expected number enrolled or expected duration, while preserving the appropriate power and Type I error rate. We also explore the benefits of parallel computation in the …
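
A minimal, generic simulated-annealing loop of the kind described above. The objective here is a simple stand-in function; in the trial-design setting it would be an expected sample size or duration estimated by simulating the design, with power and Type I error violations penalized. The parameter names and cooling schedule are illustrative, not the paper's.

```python
# Generic simulated annealing over a vector of design parameters (illustrative).
import numpy as np

rng = np.random.default_rng(4)

def objective(theta):
    """Stand-in for 'expected sample size + penalty for constraint violations'."""
    return np.sum((theta - np.array([2.0, -1.0, 0.5])) ** 2)

def simulated_annealing(theta0, n_iter=5000, step=0.3, t0=1.0):
    theta, best = theta0.copy(), theta0.copy()
    f_theta = f_best = objective(theta)
    for i in range(n_iter):
        temp = t0 * (1 - i / n_iter) + 1e-6            # linear cooling schedule
        cand = theta + rng.normal(scale=step, size=theta.shape)
        f_cand = objective(cand)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature decreases.
        if f_cand < f_theta or rng.random() < np.exp(-(f_cand - f_theta) / temp):
            theta, f_theta = cand, f_cand
            if f_cand < f_best:
                best, f_best = cand.copy(), f_cand
    return best, f_best

best, value = simulated_annealing(np.zeros(3))
print("best parameters:", best.round(2), "objective:", round(value, 4))
```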


Censoring Unbiased Regression Trees And Ensembles, Jon Arni Steingrimsson, Liqun Diao, Robert L. Strawderman Oct 2016

This paper proposes a novel approach to building regression trees and ensemble learning in survival analysis. By first extending the theory of censoring unbiased transformations, we construct observed data estimators of full data loss functions in cases where responses can be right censored. This theory is used to construct two specific classes of methods for building regression trees and regression ensembles that respectively make use of Buckley-James and doubly robust estimating equations for a given full data risk function. For the particular case of squared error loss, we further show how to implement these algorithms using existing software (e.g., CART, …
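
A minimal sketch of one censoring-unbiased transformation, the inverse-probability-of-censoring-weighted (IPCW) form, followed by an off-the-shelf regression tree. The hand-rolled Kaplan-Meier step and all names are illustrative; the Buckley-James and doubly robust versions discussed above additionally model the conditional survival function given covariates and are not shown.

```python
# IPCW-transformed response + a standard regression tree (illustrative sketch).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=(n, 2))
t_event = np.exp(1.0 + 0.7 * x[:, 0] + rng.normal(scale=0.5, size=n))   # latent event times
t_cens = np.exp(1.5 + rng.normal(scale=0.5, size=n))                    # censoring times
t_obs = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)                               # event indicator

def km_survival(times, events, eval_times):
    """Kaplan-Meier estimate of P(time > s), evaluated at each eval_time."""
    m = len(times)
    order = np.argsort(times)
    t_sorted, e_sorted = times[order], events[order]
    at_risk = m - np.arange(m)
    surv = np.cumprod(np.where(e_sorted == 1, 1 - 1 / at_risk, 1.0))
    idx = np.searchsorted(t_sorted, eval_times, side="right") - 1
    return np.where(idx >= 0, surv[np.clip(idx, 0, m - 1)], 1.0)

# Censoring survival function: treat censoring as the "event" in the KM fit.
g_hat = np.clip(km_survival(t_obs, 1 - delta, t_obs), 0.05, None)

# Under independent censoring, the IPCW-transformed response has the same
# conditional mean as log(T) given X, so a standard regression tree can be fit.
y_star = delta * np.log(t_obs) / g_hat
tree = DecisionTreeRegressor(max_depth=3).fit(x, y_star)
print("tree R^2 on the transformed response:", round(tree.score(x, y_star), 3))
```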


Matching The Efficiency Gains Of The Logistic Regression Estimator While Avoiding Its Interpretability Problems, In Randomized Trials, Michael Rosenblum, Jon Arni Steingrimsson Oct 2016

Adjusting for prognostic baseline variables can lead to improved power in randomized trials. For binary outcomes, a logistic regression estimator is commonly used for such adjustment. This has resulted in substantial efficiency gains in practice, e.g., gains equivalent to reducing the required sample size by 20-28% were observed in a recent survey of traumatic brain injury trials. Robinson and Jewell (1991) proved that the logistic regression estimator is guaranteed to have equal or better asymptotic efficiency compared to the unadjusted estimator (which ignores baseline variables). Unfortunately, the logistic regression estimator has the following dangerous vulnerabilities: it is only interpretable when …
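
The interpretability issue arises because the logistic regression coefficient is a conditional odds ratio. A closely related idea is standardization (g-computation): convert the fitted logistic model into a marginal risk difference by averaging predictions under each arm. The sketch below illustrates that idea with scikit-learn on simulated data; it is not necessarily the exact estimator proposed in the paper, and all names are illustrative.

```python
# Standardization (g-computation): turn a logistic working model into a
# marginal risk-difference estimate by averaging arm-specific predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 800
x = rng.normal(size=(n, 2))                    # prognostic baseline variables
a = rng.binomial(1, 0.5, size=n)               # randomized treatment
p = 1 / (1 + np.exp(-(-0.3 + 0.6 * a + x[:, 0])))
y = rng.binomial(1, p)                         # binary outcome

model = LogisticRegression(max_iter=1000).fit(np.column_stack([a, x]), y)

# Predict each participant's outcome probability under treatment and control,
# then average: the difference is a marginal risk difference, which remains
# interpretable even when treatment effects are heterogeneous.
p1 = model.predict_proba(np.column_stack([np.ones(n), x]))[:, 1]
p0 = model.predict_proba(np.column_stack([np.zeros(n), x]))[:, 1]
print("standardized risk difference:", round(np.mean(p1) - np.mean(p0), 3))

# For comparison, the unadjusted risk difference:
print("unadjusted risk difference:", round(y[a == 1].mean() - y[a == 0].mean(), 3))
```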


Improving Precision By Adjusting For Baseline Variables In Randomized Trials With Binary Outcomes, Without Regression Model Assumptions, Jon Arni Steingrimsson, Daniel F. Hanley, Michael Rosenblum Aug 2016

In randomized clinical trials with baseline variables that are prognostic for the primary outcome, there is potential to improve precision and reduce sample size by appropriately adjusting for these variables. A major challenge is that there are multiple statistical methods to adjust for baseline variables, but little guidance on which is best to use in a given context. The choice of method can have important consequences. For example, one commonly used method leads to uninterpretable estimates if there is any treatment effect heterogeneity, which would jeopardize the validity of trial conclusions. We give practical guidance on how to avoid this …


Sensitivity Of Trial Performance To Delay Outcomes, Accrual Rates, And Prognostic Variables Based On A Simulated Randomized Trial With Adaptive Enrichment, Tianchen Qian, Elizabeth Colantuoni, Aaron Fisher, Michael Rosenblum Aug 2016

Adaptive enrichment designs involve rules for restricting enrollment to a subset of the population during the course of an ongoing trial. This can be used to target those who benefit from the experimental treatment. To leverage prognostic information in baseline variables and short-term outcomes, we use a semiparametric, locally efficient estimator, and investigate its strengths and limitations compared to standard estimators. Through simulation studies, we assess how sensitive the trial performance (Type I error, power, expected sample size, trial duration) is to different design characteristics. Our simulation distributions mimic features of data from the Alzheimer’s Disease Neuroimaging Initiative, and involve …


Inequality In Treatment Benefits: Can We Determine If A New Treatment Benefits The Many Or The Few?, Emily Huang, Ethan Fang, Daniel Hanley, Michael Rosenblum Dec 2015

The primary analysis in many randomized controlled trials focuses on the average treatment effect and does not address whether treatment benefits are widespread or limited to a select few. This problem affects many disease areas, since it stems from how randomized trials, often the gold standard for evaluating treatments, are designed and analyzed. Our goal is to learn about the fraction who benefit from a treatment, based on randomized trial data. We consider the case where the outcome is ordinal, with binary outcomes as a special case. In general, the fraction who benefit is a non-identifiable parameter, and the best …


Adaptive Enrichment Designs For Randomized Trials With Delayed Endpoints, Using Locally Efficient Estimators To Improve Precision, Michael Rosenblum, Tianchen Qian, Yu Du, Huitong Qiu Apr 2015

Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria based on accrued data in an ongoing trial. For example, enrollment of a subpopulation where there is sufficient evidence of treatment efficacy, futility, or harm could be stopped, while enrollment for the remaining subpopulations is continued. Most existing methods for constructing adaptive enrichment designs are limited to situations where patient outcomes are observed soon after enrollment. This is a major barrier to the use of such designs in practice, since for many diseases the outcome of most clinical importance does not occur shortly after enrollment. We propose a new class …


Cross-Design Synthesis For Extending The Applicability Of Trial Evidence When Treatment Effect Is Heterogeneous-I. Methodology, Ravi Varadhan, Carlos Weiss Nov 2014

Randomized controlled trials (RCTs) provide reliable evidence for approval of new treatments, informing clinical practice, and coverage decisions. The participants in RCTs are often not a representative sample of the larger at-risk population. Hence it is argued that the average treatment effect from the trial is not generalizable to the larger at-risk population. An essential premise of this argument is that there is significant heterogeneity in the treatment effect (HTE). We present a new method to extrapolate the treatment effect from a trial to a target group that is inadequately represented in the trial, when HTE is present. Our method …


Interadapt -- An Interactive Tool For Designing And Evaluating Randomized Trials With Adaptive Enrollment Criteria, Aaron Joel Fisher, Harris Jaffee, Michael Rosenblum Jun 2014

The interAdapt R package is designed to be used by statisticians and clinical investigators to plan randomized trials. It can be used to determine if certain adaptive designs offer tangible benefits compared to standard designs, in the context of investigators’ specific trial goals and constraints. Specifically, interAdapt compares the performance of trial designs with adaptive enrollment criteria versus standard (non-adaptive) group sequential trial designs. Performance is compared in terms of power, expected trial duration, and expected sample size. Users can either work directly in the R console, or with a user-friendly shiny application that requires no programming experience. Several added …


Targeted Maximum Likelihood Estimation Using Exponential Families, Iván Díaz, Michael Rosenblum Jun 2014

Targeted maximum likelihood estimation (TMLE) is a general method for estimating parameters in semiparametric and nonparametric models. Each iteration of TMLE involves fitting a parametric submodel that targets the parameter of interest. We investigate the use of exponential families to define the parametric submodel. This implementation of TMLE gives a general approach for estimating any smooth parameter in the nonparametric model. A computational advantage of this approach is that each iteration of TMLE involves estimation of a parameter in an exponential family, which is a convex optimization problem for which software implementing reliable and computationally efficient methods exists. We illustrate …
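
One way to read the description above (notation ours, not taken from the paper): the fluctuation submodel through an initial density estimate p takes an exponential-family form,

\[
  p_{\varepsilon}(o) \;=\; \frac{p(o)\,\exp\{\varepsilon\, D(o)\}}{\int p(\tilde o)\,\exp\{\varepsilon\, D(\tilde o)\}\, d\tilde o},
\]

where D is a fixed function such as the efficient influence function of the target parameter. Fitting the low-dimensional parameter ε by maximum likelihood at each TMLE iteration is then a convex optimization problem, since the exponential-family log-likelihood is concave in its natural parameter.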


Adaptive Randomized Trial Designs That Cannot Be Dominated By Any Standard Design At The Same Total Sample Size, Michael Rosenblum Jan 2014

Prior work has shown that certain types of adaptive designs can always be dominated by a suitably chosen, standard, group sequential design. This applies to adaptive designs with rules for modifying the total sample size. A natural question is whether analogous results hold for other types of adaptive designs. We focus on adaptive enrichment designs, which involve preplanned rules for modifying enrollment criteria based on accrued data in a randomized trial. Such designs often involve multiple hypotheses, e.g., one for the total population and one for a predefined subpopulation, such as those with high disease severity at baseline. We fix …


Joint Estimation Of Multiple Graphical Models From High Dimensional Time Series, Huitong Qiu, Fang Han, Han Liu, Brian Caffo Nov 2013

In this manuscript, the problem of jointly estimating multiple graphical models in high dimensions is considered. It is assumed that the data are collected from n subjects, each contributing m non-independent observations. The graphical models of subjects vary, but are assumed to change smoothly according to a measure of closeness between subjects. A kernel-based method for jointly estimating all graphical models is proposed. Theoretically, under a double asymptotic framework where both (m, n) and the dimension d can increase, the explicit rate of convergence in parameter estimation is provided, thus characterizing the strength one can borrow …
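
A minimal sketch of the general strategy of kernel-weighted covariance estimation followed by sparse precision-matrix estimation, using scikit-learn's graphical_lasso. The bandwidth, kernel, tuning parameter, and placeholder data are all illustrative; the paper's estimator and its theory are not reproduced here.

```python
# Kernel-weighted covariance + graphical lasso: a rough illustration of
# estimating subject-specific graphs that vary smoothly with a closeness index.
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(7)
n_subjects, m, d = 10, 200, 5
index = np.linspace(0, 1, n_subjects)                 # closeness measure between subjects
data = [rng.normal(size=(m, d)) for _ in range(n_subjects)]   # placeholder observations

def estimate_graph(u0, bandwidth=0.3, alpha=0.1):
    """Sparse precision matrix at index value u0, borrowing strength across subjects."""
    w = np.exp(-0.5 * ((index - u0) / bandwidth) ** 2)      # Gaussian kernel weights
    w = w / w.sum()
    s = sum(wi * np.cov(di, rowvar=False) for wi, di in zip(w, data))
    _, precision = graphical_lasso(s, alpha=alpha)          # sparse inverse covariance
    return precision

prec = estimate_graph(u0=0.5)
print("estimated edge pattern:\n", (np.abs(prec) > 1e-3).astype(int))
```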


Fast Covariance Estimation For High-Dimensional Functional Data, Luo Xiao, David Ruppert, Vadim Zipunnikov, Ciprian Crainiceanu Jun 2013

For smoothing covariance functions, we propose two fast algorithms that scale linearly with the number of observations per function. Most available methods and software cannot smooth covariance matrices of dimension J × J with J > 500; the recently introduced sandwich smoother is an exception, but it is not adapted to smooth covariance matrices of large dimensions such as J ≥ 10,000. Covariance matrices of order J = 10,000, and even J = 100,000, are becoming increasingly common, e.g., in 2- and 3-dimensional medical imaging and high-density wearable sensor data. We introduce two new algorithms that can handle very large covariance matrices: 1) FACE: a …
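
Neither FACE nor the sandwich smoother is reproduced here. As a simpler illustration of why very large J need not be prohibitive, the leading eigenfunctions of a sample covariance can be obtained from the n × n Gram matrix without ever forming the J × J covariance; all names and the simulated data below are illustrative.

```python
# Leading eigenfunctions of a sample covariance for large J via the n x n
# Gram-matrix trick (a generic device, not the FACE algorithm).
import numpy as np

rng = np.random.default_rng(8)
n, J = 100, 10000                              # few curves, very dense grid
scores = rng.normal(size=(n, 3))
grid = np.linspace(0, 1, J)
basis = np.vstack([np.sin(np.pi * grid), np.sin(2 * np.pi * grid), np.sin(3 * np.pi * grid)])
y = scores @ basis + 0.1 * rng.normal(size=(n, J))    # simulated functional data

yc = y - y.mean(axis=0)                        # center the curves
gram = yc @ yc.T / n                           # n x n instead of J x J
evals, evecs = np.linalg.eigh(gram)
order = np.argsort(evals)[::-1][:3]            # keep the top 3 components
eigvals = evals[order]
# Map Gram-matrix eigenvectors back to eigenfunctions of the covariance operator.
eigfuns = (yc.T @ evecs[:, order]) / np.sqrt(n * eigvals)
print("top eigenvalues:", eigvals.round(3))
print("eigenfunction matrix shape:", eigfuns.shape)   # (J, 3)
```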


Trial Designs That Simultaneously Optimize The Population Enrolled And The Treatment Allocation Probabilities, Brandon S. Luber, Michael Rosenblum, Antoine Chambaz Jun 2013

Standard randomized trials may have lower than desired power when the treatment effect is only strong in certain subpopulations. This may occur, for example, in populations with varying disease severities or when subpopulations carry distinct biomarkers and only those who are biomarker positive respond to treatment. To address such situations, we develop a new trial design that combines two types of preplanned rules for updating how the trial is conducted based on data accrued during the trial. The aim is a design with greater overall power and that can better determine subpopulation specific treatment effects, while maintaining strong control of …


Optimal Tests Of Treatment Effects For The Overall Population And Two Subpopulations In Randomized Trials, Using Sparse Linear Programming, Michael Rosenblum, Han Liu, En-Hsu Yen May 2013

We propose new, optimal methods for analyzing randomized trials, when it is suspected that treatment effects may differ in two predefined subpopulations. Such sub-populations could be defined by a biomarker or risk factor measured at baseline. The goal is to simultaneously learn which subpopulations benefit from an experimental treatment, while providing strong control of the familywise Type I error rate. We formalize this as a multiple testing problem and show it is computationally infeasible to solve using existing techniques. Our solution involves a novel approach, in which we first transform the original multiple testing problem into a large, sparse linear …


Confidence Intervals For The Selected Population In Randomized Trials That Adapt The Population Enrolled, Michael Rosenblum May 2012

It is a challenge to design randomized trials when it is suspected that a treatment may benefit only certain subsets of the target population. In such situations, trial designs have been proposed that modify the population enrolled based on an interim analysis, in a preplanned manner. For example, if there is early evidence that the treatment only benefits a certain subset of the population, enrollment may then be restricted to this subset. At the end of such a trial, it is desirable to draw inferences about the selected population. We focus on constructing confidence intervals for the average treatment effect …


Longitudinal High-Dimensional Data Analysis, Vadim Zipunnikov, Sonja Greven, Brian Caffo, Daniel S. Reich, Ciprian Crainiceanu Nov 2011

We develop a flexible framework for modeling high-dimensional functional and imaging data observed longitudinally. The approach decomposes the observed variability of high-dimensional observations measured at multiple visits into three additive components: a subject-specific functional random intercept that quantifies the cross-sectional variability, a subject-specific functional slope that quantifies the dynamic irreversible deformation over multiple visits, and a subject-visit specific functional deviation that quantifies exchangeable or reversible visit-to-visit changes. The proposed method is very fast, scalable to studies including ultra-high dimensional data, and can easily be adapted to and executed on modest computing infrastructures. The method is applied to the longitudinal analysis …
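
A schematic of the three additive components described above, in notation of our own (not the authors'), for the observation Y_ij(v) of subject i at visit j with time-from-baseline T_ij:

\[
  Y_{ij}(v) \;=\; \eta(v, T_{ij}) \;+\; X_{i,0}(v) \;+\; T_{ij}\, X_{i,1}(v) \;+\; W_{ij}(v),
\]

where X_{i,0} is the subject-specific functional random intercept (cross-sectional variability), X_{i,1} is the subject-specific functional slope (irreversible change over visits), and W_{ij} is the subject-visit-specific functional deviation (reversible visit-to-visit variation).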


Assessing Association For Bivariate Survival Data With Interval Sampling: A Copula Model Approach With Application To AIDS Study, Hong Zhu, Mei-Cheng Wang Nov 2011

In disease surveillance systems or registries, bivariate survival data are typically collected under interval sampling. This refers to a situation where entry into a registry occurs at the time of the first failure event (e.g., HIV infection) within a calendar time interval, the time of the initiating event (e.g., birth) is retrospectively identified for all cases in the registry, and the second failure event (e.g., death) is subsequently observed during follow-up. Sampling bias is induced because the data are collected conditional on the first failure event occurring within a time interval. Consequently, the …


Corrected Confidence Bands For Functional Data Using Principal Components, Jeff Goldsmith, Sonja Greven, Ciprian M. Crainiceanu Nov 2011

Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this paper, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- based and decomposition-based variability are constructed. Standard mixed-model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. A bootstrap procedure is implemented to understand the uncertainty in …
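
A minimal sketch of the bootstrap idea: resample curves, redo the FPC decomposition in each resample, and take pointwise quantiles of the resulting curve estimates so that decomposition uncertainty propagates into the band. Only numpy is assumed, all names are illustrative, and this is not the paper's exact procedure.

```python
# Bootstrap over FPC decompositions so that uncertainty in the estimated basis
# enters the confidence band (illustrative sketch).
import numpy as np

rng = np.random.default_rng(9)
n, J, K = 80, 100, 3                                   # curves, grid points, components
grid = np.linspace(0, 1, J)
true_basis = np.vstack([np.sin(np.pi * grid), np.cos(np.pi * grid), np.sin(3 * np.pi * grid)])
Y = rng.normal(size=(n, K)) @ true_basis + 0.2 * rng.normal(size=(n, J))

def fpc_fit(curves, k=K):
    """Curve estimates from a rank-k FPC (truncated SVD) decomposition."""
    mu = curves.mean(axis=0)
    u, s, vt = np.linalg.svd(curves - mu, full_matrices=False)
    return mu + (u[:, :k] * s[:k]) @ vt[:k]

target = 0                                              # build a band for the first curve
boot_fits = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    idx[0] = target                                     # keep the target curve in each resample
    boot_fits.append(fpc_fit(Y[idx])[0])
boot_fits = np.array(boot_fits)

lower, upper = np.percentile(boot_fits, [2.5, 97.5], axis=0)
print("pointwise band width at midpoint:", round(upper[J // 2] - lower[J // 2], 3))
```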


Component Extraction Of Complex Biomedical Signal And Performance Analysis Based On Different Algorithm, Hemant Pasusangai Kasturiwale Jun 2011

Biomedical signals can arise from one or many sources, including the heart, brain, and endocrine systems. Multiple sources pose a challenge to researchers, as recordings may be contaminated with artifacts and noise. Biomedical time series signals include the electroencephalogram (EEG), the electrocardiogram (ECG), etc. The morphology of the cardiac signal is very important in most ECG-based diagnostics. Diagnosis based on visual observation of recorded ECG, EEG, etc., may not be accurate. To achieve better understanding, PCA (Principal Component Analysis) and ICA (Independent Component Analysis) algorithms help in analyzing ECG signals. The immense scope in the field of biomedical signal processing of Independent Component Analysis ( …
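
A minimal sketch of applying PCA and ICA to a simulated multichannel signal with scikit-learn. The "cardiac-like" and "respiratory-like" sources and the mixing matrix are purely synthetic; this only illustrates the two decompositions compared in the paper.

```python
# PCA and ICA applied to a simulated multichannel biomedical-style recording.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(10)
t = np.linspace(0, 10, 4000)
s1 = np.sign(np.sin(2 * np.pi * 1.2 * t))          # spiky "cardiac-like" source
s2 = np.sin(2 * np.pi * 0.3 * t)                   # slow "respiratory-like" drift
s3 = rng.laplace(size=t.size)                      # noise / artifact source
S = np.column_stack([s1, s2, s3])

A = rng.normal(size=(3, 3))                        # unknown mixing into 3 "leads"
X = S @ A.T + 0.05 * rng.normal(size=S.shape)      # observed multichannel recording

pca_components = PCA(n_components=3).fit_transform(X)                      # uncorrelated
ica_components = FastICA(n_components=3, random_state=0).fit_transform(X)  # independent

# ICA typically recovers the sources up to order and scale, whereas PCA orders
# components by explained variance; compare each leading estimate to the sources.
def best_abs_corr(est, src):
    return max(abs(np.corrcoef(est, src[:, k])[0, 1]) for k in range(src.shape[1]))

print("ICA best |corr| with a true source:", round(best_abs_corr(ica_components[:, 0], S), 2))
print("PCA best |corr| with a true source:", round(best_abs_corr(pca_components[:, 0], S), 2))
```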


A Broad Symmetry Criterion For Nonparametric Validity Of Parametrically-Based Tests In Randomized Trials, Russell T. Shinohara, Constantine E. Frangakis, Constantine G. Lyketsos Apr 2011

Summary. Pilot phases of a randomized clinical trial often suggest that a parametric model may be an accurate description of the trial's longitudinal trajectories. However, parametric models are often not used for fear that they may invalidate tests of null hypotheses of equality between the experimental groups. Existing work has shown that when, for some types of data, certain parametric models are used, the validity for testing the null is preserved even if the parametric models are incorrect. Here, we provide a broader and easier to check characterization of parametric models that can be used to (a) preserve nonparametric validity …