Open Access. Powered by Scholars. Published by Universities.®
Design of Experiments and Sample Surveys Commons™
Articles 1 - 7 of 7
Full-Text Articles in Design of Experiments and Sample Surveys
Adaptive Pair-Matching In The SEARCH Trial And Estimation Of The Intervention Effect, Laura Balzer, Maya L. Petersen, Mark J. Van Der Laan
U.C. Berkeley Division of Biostatistics Working Paper Series
In randomized trials, pair-matching is an intuitive design strategy to protect study validity and to potentially increase study power. In a common design, candidate units are identified, and their baseline characteristics are used to create the best n/2 matched pairs. Within the resulting pairs, the intervention is randomized, and the outcomes are measured at the end of follow-up. We consider this design to be adaptive, because the construction of the matched pairs depends on the baseline covariates of all candidate units. As a consequence, the observed data cannot be considered as n/2 independent, identically distributed (i.i.d.) pairs of units, as current practice assumes. …
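The matching step the abstract describes can be sketched with a simple greedy algorithm: rank all candidate pairs by distance between baseline covariate vectors, take the closest available pair repeatedly, then randomize within each pair. This is only an illustrative sketch (the function names and the Euclidean metric are my choices, not the SEARCH trial's actual matching procedure):

```python
import itertools
import random

def greedy_pair_match(units):
    """Greedily form n/2 pairs minimizing Euclidean distance between
    baseline covariate vectors. 'units' is a list of (id, covariates)
    tuples. Hypothetical helper, not the trial's actual algorithm."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a[1], b[1])) ** 0.5
    # enumerate all candidate pairs, closest first
    candidates = sorted(itertools.combinations(units, 2), key=lambda p: dist(*p))
    used, pairs = set(), []
    for a, b in candidates:
        if a[0] not in used and b[0] not in used:
            pairs.append((a[0], b[0]))
            used.update([a[0], b[0]])
    return pairs

def randomize_within_pairs(pairs, seed=0):
    """Flip a fair coin within each matched pair; the first element of
    each returned tuple is the unit assigned to the intervention."""
    rng = random.Random(seed)
    return [p if rng.random() < 0.5 else (p[1], p[0]) for p in pairs]
```

Note that because `greedy_pair_match` looks at the covariates of *all* candidates before forming any pair, the resulting pairs are dependent, which is exactly the adaptivity the paper addresses.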
Statistical Inference For Data Adaptive Target Parameters, Mark J. Van Der Laan, Alan E. Hubbard, Sara Kherad Pajouh
U.C. Berkeley Division of Biostatistics Working Paper Series
Suppose one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target, we partition the sample into V equal-sized subsamples, and use this partitioning to define V splits into an estimation sample (one of the V subsamples) and a corresponding complementary parameter-generating sample that is used to generate a target parameter. For each of the V parameter-generating samples, we apply an algorithm that maps the sample into a target parameter mapping, which represents the statistical target parameter generated by that parameter-generating …
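The splitting scheme in the abstract can be written down in a few lines: partition the n indices into V folds, and for each fold pair it (as the estimation sample) with its complement (as the parameter-generating sample). A minimal sketch, with names of my own choosing:

```python
def v_fold_splits(n, V):
    """Partition indices 0..n-1 into V (near-)equal subsamples and
    return, for each v, the pair (parameter_generating, estimation):
    the v-th subsample is the estimation sample, its complement the
    parameter-generating sample. Illustrative sketch only."""
    folds = [list(range(v, n, V)) for v in range(V)]
    splits = []
    for v in range(V):
        estimation = folds[v]
        complement = set(estimation)
        parameter_generating = [i for i in range(n) if i not in complement]
        splits.append((parameter_generating, estimation))
    return splits
```

Each of the V parameter-generating samples would then be fed to the algorithm that produces a target parameter mapping, which is evaluated on the held-out estimation sample.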
Estimation And Testing In Targeted Group Sequential Covariate-Adjusted Randomized Clinical Trials, Antoine Chambaz, Mark J. Van Der Laan
U.C. Berkeley Division of Biostatistics Working Paper Series
This article is devoted to the construction and asymptotic study of adaptive group sequential covariate-adjusted randomized clinical trials analyzed through the prism of the semiparametric methodology of targeted maximum likelihood estimation (TMLE). We show how to build, as the data accrue group-sequentially, a sampling design which targets a user-supplied optimal design. We also show how to carry out a sound TMLE statistical inference based on such an adaptive sampling scheme (thereby extending some results known so far only in the i.i.d. setting), and how group-sequential testing applies on top of it. The procedure is robust (i.e., consistent even if the …
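To give a flavor of "targeting a user-supplied optimal design as data accrue": a much simpler, classical example is response-adaptive randomization toward Neyman allocation, where after each group one re-estimates per-arm outcome variability and moves the treatment-assignment probability toward s1/(s0+s1). This sketch is not the paper's TMLE-based construction, just an illustration of the general idea:

```python
import statistics

def neyman_allocation_update(outcomes_by_arm, default_p=0.5):
    """After a group of subjects completes, re-estimate per-arm outcome
    standard deviations and return the Neyman-optimal probability of
    assigning the next group to treatment, p = s1 / (s0 + s1).
    Falls back to balanced randomization until both arms have data.
    Hedged sketch of adaptive design targeting, not the paper's method."""
    s = {}
    for arm in (0, 1):
        ys = outcomes_by_arm.get(arm, [])
        s[arm] = statistics.pstdev(ys) if len(ys) >= 2 else None
    if s[0] is None or s[1] is None or (s[0] + s[1]) == 0:
        return default_p  # not enough data yet: stay balanced
    return s[1] / (s[0] + s[1])
```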
Confidence Intervals For The Population Mean Tailored To Small Sample Sizes, With Applications To Survey Sampling, Michael Rosenblum, Mark J. Van Der Laan
U.C. Berkeley Division of Biostatistics Working Paper Series
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes.
We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability …
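The construction from a tail bound can be made concrete. Bernstein's inequality for i.i.d. observations bounded in [lo, hi] gives P(|mean − mu| ≥ t) ≤ 2 exp(−n t² / (2σ² + (2/3)Rt)), with R = hi − lo; setting the right side equal to alpha and solving the resulting quadratic for t yields a finite-sample-valid half-width. The sketch below uses the worst-case variance bound σ² ≤ R²/4, so it is more conservative than the intervals developed in the paper:

```python
import math

def bernstein_ci(xs, lo, hi, alpha=0.05):
    """Conservative (1 - alpha) confidence interval for the mean of
    i.i.d. observations known to lie in [lo, hi], via Bernstein's
    inequality with the worst-case variance bound (hi - lo)^2 / 4.
    Valid for every sample size n; a hedged sketch, not the paper's
    sharper construction.
    Solving 2*exp(-n t^2 / (2 var + 2Rt/3)) = alpha for t gives the
    quadratic n t^2 - (2RL/3) t - 2 var L = 0, L = log(2/alpha)."""
    n = len(xs)
    R = hi - lo
    var = R * R / 4.0                      # worst-case variance for bounded X
    L = math.log(2.0 / alpha)
    b = 2.0 * R * L / 3.0
    t = (b + math.sqrt(b * b + 8.0 * n * var * L)) / (2.0 * n)  # positive root
    m = sum(xs) / n
    return max(lo, m - t), min(hi, m + t)
```

Unlike a normal-approximation interval, the coverage guarantee here does not rely on the central limit theorem, which is the point of the paper's small-sample focus.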
Detailed Version: Analyzing Direct Effects In Randomized Trials With Secondary Interventions: An Application To HIV Prevention Trials, Michael A. Rosenblum, Nicholas P. Jewell, Mark J. Van Der Laan, Stephen Shiboski, Ariane Van Der Straten, Nancy Padian
U.C. Berkeley Division of Biostatistics Working Paper Series
This is the detailed technical report that accompanies the paper “Analyzing Direct Effects in Randomized Trials with Secondary Interventions: An Application to HIV Prevention Trials” (an unpublished, technical report version of which is available online at http://www.bepress.com/ucbbiostat/paper223).
The version here gives full details of the models for the time-dependent analysis, and presents further results in the data analysis section. The Methods for Improving Reproductive Health in Africa (MIRA) trial is a recently completed randomized trial that investigated the effect of diaphragm and lubricant gel use in reducing HIV infection among susceptible women. 5,045 women were randomly assigned to either the …
Analyzing Direct Effects In Randomized Trials With Secondary Interventions, Michael Rosenblum, Nicholas P. Jewell, Mark J. Van Der Laan, Stephen Shiboski, Ariane Van Der Straten, Nancy Padian
U.C. Berkeley Division of Biostatistics Working Paper Series
The Methods for Improving Reproductive Health in Africa (MIRA) trial is a recently completed randomized trial that investigated the effect of diaphragm and lubricant gel use in reducing HIV infection among susceptible women. 5,045 women were randomly assigned to either the active treatment arm or not. Additionally, all subjects in both arms received intensive condom counselling and provision, the "gold standard" HIV prevention barrier method. There was much lower reported condom use in the intervention arm than in the control arm, making it difficult to answer important public health questions based solely on the intention-to-treat analysis. We adapt an analysis …
A General Framework For Statistical Performance Comparison Of Evolutionary Computation Algorithms, David Shilane, Jarno Martikainen, Sandrine Dudoit, Seppo Ovaska
U.C. Berkeley Division of Biostatistics Working Paper Series
This paper proposes a statistical methodology for comparing the performance of evolutionary computation algorithms. A two-fold sampling scheme for collecting performance data is introduced, and these data are analyzed using bootstrap-based multiple hypothesis testing procedures. The proposed method is sufficiently flexible to allow the researcher to choose how performance is measured, does not rely upon distributional assumptions, and can be extended to analyze many other randomized numeric optimization routines. As a result, this approach offers a convenient, flexible, and reliable technique for comparing algorithms in a wide variety of applications.
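The core of such a comparison can be sketched as a pairwise bootstrap test of equal mean performance with a multiplicity correction. The sketch below uses pooled resampling under the null and a Bonferroni adjustment; it is a simplified illustration of the general approach, not the paper's exact two-fold sampling and multiple-testing procedure:

```python
import random
import statistics

def bootstrap_compare(perf, B=2000, alpha=0.05, seed=1):
    """Pairwise bootstrap tests of equal mean performance across
    algorithms, Bonferroni-corrected for multiplicity. 'perf' maps
    algorithm name -> list of performance scores (e.g., best fitness
    per independent run). Simplified sketch, not the paper's method."""
    rng = random.Random(seed)
    names = sorted(perf)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    alpha_adj = alpha / max(len(pairs), 1)  # Bonferroni correction
    results = {}
    for a, b in pairs:
        xa, xb = perf[a], perf[b]
        obs = statistics.fmean(xa) - statistics.fmean(xb)
        pooled = xa + xb  # resample under the null of no difference
        count = 0
        for _ in range(B):
            ra = [rng.choice(pooled) for _ in xa]
            rb = [rng.choice(pooled) for _ in xb]
            if abs(statistics.fmean(ra) - statistics.fmean(rb)) >= abs(obs):
                count += 1
        p = (count + 1) / (B + 1)
        results[(a, b)] = (obs, p, p < alpha_adj)
    return results
```

Because no parametric form is assumed for the score distributions, this style of test matches the abstract's claim of avoiding distributional assumptions.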