Open Access. Powered by Scholars. Published by Universities.®

Statistics and Probability Commons

Articles 1 - 30 of 43

Full-Text Articles in Statistics and Probability

Stochastic Optimization Of Adaptive Enrichment Designs For Two Subpopulations, Aaron Fisher, Michael Rosenblum Dec 2016

Johns Hopkins University, Dept. of Biostatistics Working Papers

An adaptive enrichment design is a randomized trial that allows enrollment criteria to be modified at interim analyses, based on a preset decision rule. When there is prior uncertainty regarding treatment effect heterogeneity, these trial designs can provide improved power for detecting treatment effects in subpopulations. We present a simulated annealing approach to search over the space of decision rules and other parameters for an adaptive enrichment design. The goal is to minimize the expected number enrolled or expected duration, while preserving the appropriate power and Type I error rate. We also explore the benefits of parallel computation in the …
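To make the search strategy concrete, here is a minimal R sketch of a generic simulated annealing loop over a vector of design parameters (decision-rule thresholds, per-stage sample sizes). The objective function, constraint check, proposal distribution, and cooling schedule are assumptions made for illustration, not the authors' implementation; in practice both supplied functions would be evaluated by simulating the trial design.

  # Generic simulated annealing over a numeric vector of design parameters.
  # `expected_size(par)` returns the quantity to minimize (e.g., expected
  # number enrolled); `meets_constraints(par)` checks power and Type I error,
  # typically via trial simulation. Both are user-supplied placeholders.
  simulated_annealing <- function(par0, expected_size, meets_constraints,
                                  n_iter = 5000, temp0 = 1, cooling = 0.999) {
    current <- best <- par0
    temp <- temp0
    for (i in seq_len(n_iter)) {
      proposal <- current + rnorm(length(current), sd = 0.1)  # random neighbour
      if (meets_constraints(proposal)) {
        delta <- expected_size(proposal) - expected_size(current)
        if (delta < 0 || runif(1) < exp(-delta / temp)) current <- proposal
        if (expected_size(current) < expected_size(best)) best <- current
      }
      temp <- temp * cooling  # geometric cooling schedule
    }
    best
  }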


Inequality In Treatment Benefits: Can We Determine If A New Treatment Benefits The Many Or The Few?, Emily Huang, Ethan Fang, Daniel Hanley, Michael Rosenblum Dec 2015

Johns Hopkins University, Dept. of Biostatistics Working Papers

The primary analysis in many randomized controlled trials focuses on the average treatment effect and does not address whether treatment benefits are widespread or limited to a select few. This problem affects many disease areas, since it stems from how randomized trials, often the gold standard for evaluating treatments, are designed and analyzed. Our goal is to learn about the fraction who benefit from a treatment, based on randomized trial data. We consider the case where the outcome is ordinal, with binary outcomes as a special case. In general, the fraction who benefit is a non-identifiable parameter, and the best …
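For the special case of a binary outcome, the non-identifiability can be made concrete. Writing Y_1 and Y_0 for the potential outcomes under treatment and control, the fraction who benefit is P(Y_1 = 1, Y_0 = 0); a randomized trial identifies only the marginals, which (by a standard Fréchet-Hoeffding argument, given here purely as illustration, not as the paper's result) bound the parameter by

  \max\{0,\; P(Y_1 = 1) - P(Y_0 = 1)\} \;\le\; P(Y_1 = 1,\, Y_0 = 0) \;\le\; \min\{P(Y_1 = 1),\; P(Y_0 = 0)\}.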


Interadapt -- An Interactive Tool For Designing And Evaluating Randomized Trials With Adaptive Enrollment Criteria, Aaron Joel Fisher, Harris Jaffee, Michael Rosenblum Jun 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

The interAdapt R package is designed to be used by statisticians and clinical investigators to plan randomized trials. It can be used to determine if certain adaptive designs offer tangible benefits compared to standard designs, in the context of investigators’ specific trial goals and constraints. Specifically, interAdapt compares the performance of trial designs with adaptive enrollment criteria versus standard (non-adaptive) group sequential trial designs. Performance is compared in terms of power, expected trial duration, and expected sample size. Users can either work directly in the R console, or with a user-friendly shiny application that requires no programming experience. Several added …


Targeted Maximum Likelihood Estimation Using Exponential Families, Iván Díaz, Michael Rosenblum Jun 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

Targeted maximum likelihood estimation (TMLE) is a general method for estimating parameters in semiparametric and nonparametric models. Each iteration of TMLE involves fitting a parametric submodel that targets the parameter of interest. We investigate the use of exponential families to define the parametric submodel. This implementation of TMLE gives a general approach for estimating any smooth parameter in the nonparametric model. A computational advantage of this approach is that each iteration of TMLE involves estimation of a parameter in an exponential family, which is a convex optimization problem for which software implementing reliable and computationally efficient methods exists. We illustrate …
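Schematically (with notation assumed here for illustration), the exponential-family submodel through an initial density estimate \hat{p} can be written as

  p_\varepsilon(x) \;=\; c(\varepsilon)\, \hat{p}(x)\, \exp\{\varepsilon^\top D(x)\},

where D is an estimate of the efficient influence function of the target parameter and c(\varepsilon) is the normalizing constant. Each TMLE iteration fits \varepsilon by maximum likelihood in this low-dimensional exponential family, which is the convex optimization step referred to above, and then replaces \hat{p} with p_{\hat\varepsilon}.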


Adaptive Randomized Trial Designs That Cannot Be Dominated By Any Standard Design At The Same Total Sample Size, Michael Rosenblum Jan 2014

Johns Hopkins University, Dept. of Biostatistics Working Papers

Prior work has shown that certain types of adaptive designs can always be dominated by a suitably chosen, standard, group sequential design. This applies to adaptive designs with rules for modifying the total sample size. A natural question is whether analogous results hold for other types of adaptive designs. We focus on adaptive enrichment designs, which involve preplanned rules for modifying enrollment criteria based on accrued data in a randomized trial. Such designs often involve multiple hypotheses, e.g., one for the total population and one for a predefined subpopulation, such as those with high disease severity at baseline. We fix …


Uniformly Most Powerful Tests For Simultaneously Detecting A Treatment Effect In The Overall Population And At Least One Subpopulation, Michael Rosenblum Jun 2013

Johns Hopkins University, Dept. of Biostatistics Working Papers

After conducting a randomized trial, it is often of interest to determine treatment effects in the overall study population, as well as in certain subpopulations. These subpopulations could be defined by a risk factor or biomarker measured at baseline. We focus on situations where the overall population is partitioned into two predefined subpopulations. When the true average treatment effect for the overall population is positive, it logically follows that it must be positive for at least one subpopulation. We construct new multiple testing procedures that are uniformly most powerful for simultaneously rejecting the overall population null hypothesis and at least …
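The logical relationship invoked above can be written out (notation assumed here): if the two subpopulations have proportions p_1 + p_2 = 1 and average treatment effects \Delta_1 and \Delta_2, then the overall effect is the mixture

  \Delta \;=\; p_1 \Delta_1 + p_2 \Delta_2, \qquad \Delta > 0 \;\Rightarrow\; \max(\Delta_1, \Delta_2) > 0,

so any configuration with a positive overall effect necessarily has a positive effect in at least one subpopulation.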


Trial Designs That Simultaneously Optimize The Population Enrolled And The Treatment Allocation Probabilities, Brandon S. Luber, Michael Rosenblum, Antoine Chambaz Jun 2013

Johns Hopkins University, Dept. of Biostatistics Working Papers

Standard randomized trials may have lower than desired power when the treatment effect is only strong in certain subpopulations. This may occur, for example, in populations with varying disease severities or when subpopulations carry distinct biomarkers and only those who are biomarker positive respond to treatment. To address such situations, we develop a new trial design that combines two types of preplanned rules for updating how the trial is conducted based on data accrued during the trial. The aim is a design with greater overall power and that can better determine subpopulation specific treatment effects, while maintaining strong control of …


Optimal Tests Of Treatment Effects For The Overall Population And Two Subpopulations In Randomized Trials, Using Sparse Linear Programming, Michael Rosenblum, Han Liu, En-Hsu Yen May 2013

Johns Hopkins University, Dept. of Biostatistics Working Papers

We propose new, optimal methods for analyzing randomized trials, when it is suspected that treatment effects may differ in two predefined subpopulations. Such subpopulations could be defined by a biomarker or risk factor measured at baseline. The goal is to simultaneously learn which subpopulations benefit from an experimental treatment, while providing strong control of the familywise Type I error rate. We formalize this as a multiple testing problem and show it is computationally infeasible to solve using existing techniques. Our solution involves a novel approach, in which we first transform the original multiple testing problem into a large, sparse linear …
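The flavour of such a transformation can be sketched as follows (a schematic only, with assumed notation, not the paper's exact program). After discretizing the space of test statistics into regions r = 1, ..., R, a multiple testing procedure is encoded by variables m_r(S) in [0, 1], the probability of rejecting the set S of null hypotheses when the statistic lands in region r. Expected power and the familywise Type I error constraints are then linear in these variables:

  \begin{aligned}
  \text{maximize}\quad & \sum_r \sum_S w_{r,S}\, m_r(S) \\
  \text{subject to}\quad & \sum_r \pi_r(P) \sum_{S:\, S \cap \mathcal{N}(P) \neq \emptyset} m_r(S) \;\le\; \alpha \quad \text{for each null distribution } P, \\
  & \sum_S m_r(S) = 1, \qquad m_r(S) \ge 0,
  \end{aligned}

where \pi_r(P) is the probability the statistic falls in region r under P, \mathcal{N}(P) is the set of true nulls under P, and w_{r,S} are power weights.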


Confidence Intervals For The Selected Population In Randomized Trials That Adapt The Population Enrolled, Michael Rosenblum May 2012

Johns Hopkins University, Dept. of Biostatistics Working Papers

It is a challenge to design randomized trials when it is suspected that a treatment may benefit only certain subsets of the target population. In such situations, trial designs have been proposed that modify the population enrolled based on an interim analysis, in a preplanned manner. For example, if there is early evidence that the treatment only benefits a certain subset of the population, enrollment may then be restricted to this subset. At the end of such a trial, it is desirable to draw inferences about the selected population. We focus on constructing confidence intervals for the average treatment effect …


Longitudinal High-Dimensional Data Analysis, Vadim Zipunnikov, Sonja Greven, Brian Caffo, Daniel S. Reich, Ciprian Crainiceanu Nov 2011

Johns Hopkins University, Dept. of Biostatistics Working Papers

We develop a flexible framework for modeling high-dimensional functional and imaging data observed longitudinally. The approach decomposes the observed variability of high-dimensional observations measured at multiple visits into three additive components: a subject-specific functional random intercept that quantifies the cross-sectional variability, a subject-specific functional slope that quantifies the dynamic irreversible deformation over multiple visits, and a subject-visit specific functional deviation that quantifies exchangeable or reversible visit-to-visit changes. The proposed method is very fast, scalable to studies including ultra-high dimensional data, and can easily be adapted to and executed on modest computing infrastructures. The method is applied to the longitudinal analysis …
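In symbols (a sketch of the decomposition just described, with notation assumed here), an image or function Y_ij(v) for subject i at visit j, observed at visit time T_ij over locations v, is modeled as

  Y_{ij}(v) \;=\; \eta(v) \;+\; X_i^{(0)}(v) \;+\; T_{ij}\, X_i^{(1)}(v) \;+\; U_{ij}(v),

where \eta is a fixed mean surface, X_i^{(0)} is the subject-specific functional random intercept (cross-sectional variability), X_i^{(1)} is the subject-specific functional slope capturing irreversible change across visits, and U_ij is the subject-visit-specific deviation capturing reversible visit-to-visit variation.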


Assessing Association For Bivariate Survival Data With Interval Sampling: A Copula Model Approach With Application To Aids Study, Hong Zhu, Mei-Cheng Wang Nov 2011

Johns Hopkins University, Dept. of Biostatistics Working Papers

In disease surveillance systems or registries, bivariate survival data are typically collected under interval sampling. This refers to a situation in which entry into a registry occurs at the time of the first failure event (e.g., HIV infection) within a calendar time interval, the time of the initiating event (e.g., birth) is retrospectively identified for all cases in the registry, and the second failure event (e.g., death) is subsequently observed during follow-up. Sampling bias is induced because the data are collected conditional on the first failure event occurring within the calendar interval. Consequently, the …
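As a sketch of the copula formulation (notation assumed here), let X denote the time from the initiating event to the first failure and Y the time from the first to the second failure. The association is modeled by linking the marginal survival functions through a copula C_\theta,

  P(X > x,\, Y > y) \;=\; C_\theta\big(S_X(x),\, S_Y(y)\big),

with the dependence parameter \theta estimated by methods that account for the selection bias induced by interval sampling.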


Corrected Confidence Bands For Functional Data Using Principal Components, Jeff Goldsmith, Sonja Greven, Ciprian M. Crainiceanu Nov 2011

Johns Hopkins University, Dept. of Biostatistics Working Papers

Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however, these objects are unknown in practice. In this paper, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model-based and decomposition-based variability are constructed. Standard mixed-model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. A bootstrap procedure is implemented to understand the uncertainty in …
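In symbols (a schematic of the setting, with notation assumed here), a subject's curve is expanded as

  \hat{Y}_i(t) \;=\; \hat{\mu}(t) \;+\; \sum_{k=1}^{K} \hat{\xi}_{ik}\, \hat{\phi}_k(t),

and conventional bands treat the estimated mean \hat{\mu}, eigenfunctions \hat{\phi}_k, and truncation level K as if they were known. The bootstrap re-estimates the FPC decomposition on each resample, so the resulting pointwise and simultaneous intervals reflect decomposition-based as well as model-based variability.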


Component Extraction Of Complex Biomedical Signal And Performance Analysis Based On Different Algorithm, Hemant Pasusangai Kasturiwale Jun 2011

Johns Hopkins University, Dept. of Biostatistics Working Papers

Biomedical signals can arise from one or many sources, including the heart, brain, and endocrine systems. Signals from multiple sources pose a challenge to researchers because they may be contaminated with artifacts and noise. Biomedical time series signals include the electroencephalogram (EEG) and the electrocardiogram (ECG). The morphology of the cardiac signal is very important in most ECG-based diagnostics, and a diagnosis based only on visual observation of the recorded ECG or EEG may not be accurate. To achieve a better understanding, PCA (Principal Component Analysis) and ICA algorithms help in analyzing ECG signals. The immense scope in the field of biomedical-signal processing Independent Component Analysis ( …
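A minimal R sketch of the two decompositions mentioned above (illustrative only: the data matrix is a random placeholder, the number of components is an assumption, and the fastICA package is used simply as one readily available ICA implementation):

  # X: time samples in rows, recording channels (e.g., ECG leads) in columns
  set.seed(1)
  X <- matrix(rnorm(1000 * 4), ncol = 4)   # placeholder multichannel signal

  # Principal Component Analysis: orthogonal directions of maximal variance
  pc <- prcomp(X, center = TRUE, scale. = TRUE)
  summary(pc)                              # variance explained per component

  # Independent Component Analysis: estimates statistically independent sources
  # install.packages("fastICA")
  library(fastICA)
  ic <- fastICA(X, n.comp = 4)
  str(ic$S)                                # matrix of estimated source signals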


A Broad Symmetry Criterion For Nonparametric Validity Of Parametrically-Based Tests In Randomized Trials, Russell T. Shinohara, Constantine E. Frangakis, Constantine G. Lyketos Apr 2011

Johns Hopkins University, Dept. of Biostatistics Working Papers

Pilot phases of a randomized clinical trial often suggest that a parametric model may be an accurate description of the trial's longitudinal trajectories. However, parametric models are often not used for fear that they may invalidate tests of null hypotheses of equality between the experimental groups. Existing work has shown that when, for some types of data, certain parametric models are used, the validity for testing the null is preserved even if the parametric models are incorrect. Here, we provide a broader and easier-to-check characterization of parametric models that can be used to (a) preserve nonparametric validity …


Simple Examples Of Estimating Causal Effects Using Targeted Maximum Likelihood Estimation, Michael Rosenblum, Mark J. Van Der Laan Mar 2011

Johns Hopkins University, Dept. of Biostatistics Working Papers

We present a brief overview of targeted maximum likelihood for estimating the causal effect of a single time point treatment and of a two time point treatment. We focus on simple examples demonstrating how to apply the methodology developed in (van der Laan and Rubin, 2006; Moore and van der Laan, 2007; van der Laan, 2010a,b). We include R code for the single time point case.
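For readers who want to see the shape of such code, here is a compact R sketch of a single-time-point TMLE for the average treatment effect with a binary outcome. It is a generic illustration in the spirit of the examples referenced, not the code included in the paper; the simulated data, variable names, and logistic working models are assumptions.

  # Simulated data: W = baseline covariate, A = binary treatment, Y = binary outcome
  set.seed(1)
  n <- 1000
  W <- rnorm(n)
  A <- rbinom(n, 1, plogis(0.4 * W))
  Y <- rbinom(n, 1, plogis(-0.5 + A + 0.6 * W))
  d <- data.frame(W, A, Y)

  # Step 1: initial outcome regression Qbar(A, W) and propensity score g(W)
  Qfit <- glm(Y ~ A + W, family = binomial, data = d)
  gfit <- glm(A ~ W, family = binomial, data = d)
  g1   <- predict(gfit, type = "response")
  Q_AW <- predict(Qfit, type = "response")

  # Step 2: targeting (fluctuation) step with the "clever covariate" H(A, W)
  H   <- d$A / g1 - (1 - d$A) / (1 - g1)
  eps <- coef(glm(d$Y ~ -1 + H + offset(qlogis(Q_AW)), family = binomial))

  # Step 3: updated counterfactual predictions and plug-in estimate of E[Y1] - E[Y0]
  Q1 <- plogis(qlogis(predict(Qfit, newdata = transform(d, A = 1), type = "response")) + eps / g1)
  Q0 <- plogis(qlogis(predict(Qfit, newdata = transform(d, A = 0), type = "response")) - eps / (1 - g1))
  mean(Q1 - Q0)   # targeted estimate of the average treatment effect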


Functional Principal Components Model For High-Dimensional Brain Imaging, Vadim Zipunnikov, Brian S. Caffo, David M. Yousem, Christos Davatzikos, Brian S. Schwartz, Ciprian Crainiceanu Jan 2011

Johns Hopkins University, Dept. of Biostatistics Working Papers

We establish a fundamental equivalence between singular value decomposition (SVD) and functional principal components analysis (FPCA) models. The constructive relationship allows one to deploy the numerical efficiency of SVD to fully estimate the components of FPCA, even for extremely high-dimensional functional objects, such as brain images. As an example, a functional mixed effect model is fitted to high-resolution morphometric (RAVENS) images. The main directions of morphometric variation in brain volumes are identified and discussed.
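A small R sketch of the SVD route to FPCA (illustrative; the dimensions, centering, and variable names are assumptions). The point is that all FPCA components are obtained from the thin SVD of the n-by-p data matrix, without ever forming the p-by-p covariance:

  # Y: n subjects/images in rows, p voxels or grid points in columns (p >> n)
  set.seed(1)
  n <- 50; p <- 2000
  Y <- matrix(rnorm(n * p), n, p)

  Yc <- sweep(Y, 2, colMeans(Y))            # column-center the data
  sv <- svd(Yc, nu = n, nv = 0)             # thin SVD; right factors not stored

  scores   <- sv$u %*% diag(sv$d)                       # n-by-n matrix of FPC scores
  eigenfun <- crossprod(Yc, sv$u) %*% diag(1 / sv$d)    # p-by-n eigenvectors: V = Yc' U D^{-1}
  eigenval <- sv$d^2 / (n - 1)                          # eigenvalues of the sample covariance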


Multilevel Functional Principal Component Analysis For High-Dimensional Data, Vadim Zipunnikov, Brian Caffo, Ciprian Crainiceanu, David M. Yousem, Christos Davatzikos, Brian S. Schwartz Oct 2010

Johns Hopkins University, Dept. of Biostatistics Working Papers

We propose fast and scalable statistical methods for the analysis of hundreds or thousands of high dimensional vectors observed at multiple visits. The proposed inferential methods avoid the difficult task of loading the entire data set at once in the computer memory and use sequential access to data. This allows deployment of our methodology on low-resource computers where computations can be done in minutes on extremely large data sets. Our methods are motivated by and applied to a study where hundreds of subjects were scanned using Magnetic Resonance Imaging (MRI) at two visits roughly five years apart. The original data …


Longitudinal Penalized Functional Regression, Jeff Goldsmith, Ciprian M. Crainiceanu, Brian Caffo, Daniel Reich Sep 2010

Johns Hopkins University, Dept. of Biostatistics Working Papers

We propose a new regression model and inferential tools for the case when both the outcome and the functional exposures are observed at multiple visits. This data structure is new but increasingly present in applications where functions or images are recorded at multiple times. This raises new inferential challenges that cannot be addressed with current methods and software. Our proposed model generalizes the Generalized Linear Mixed Effects Model (GLMM) by adding functional predictors. Smoothness of the functional coefficients is ensured using roughness penalties estimated by Restricted Maximum Likelihood (REML) in a corresponding mixed effects model. This method is computationally feasible …
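In symbols (a schematic of the model class described, with notation assumed here), for subject i at visit j with scalar covariates X_ij, random-effect design Z_ij, and functional predictor W_ij(t):

  g\big( E[\,Y_{ij} \mid b_i\,] \big) \;=\; X_{ij}^\top \beta \;+\; Z_{ij}^\top b_i \;+\; \int W_{ij}(t)\, \gamma(t)\, dt,

where the coefficient function \gamma(t) is expanded in a spline basis and its roughness penalty is estimated by REML after rewriting the penalized fit as a mixed effects model.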


Likelihood Ratio Testing For Admixture Models With Application To Genetic Linkage Analysis, Chong-Zhi Di, Kung-Yee Liang Mar 2010

Johns Hopkins University, Dept. of Biostatistics Working Papers

We consider likelihood ratio tests (LRT) and their modifications for homogeneity in admixture models. The admixture model is a special case of a two-component mixture model, where one component is indexed by an unknown parameter while the parameter value for the other component is known. It has been widely used in genetic linkage analysis under heterogeneity, in which the kernel distribution is binomial. For such models, it has long been recognized that testing for homogeneity is nonstandard and the LRT statistic does not converge to a conventional χ2 distribution. In this paper, we investigate the asymptotic behavior of the LRT for …
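As a sketch of the setup (notation assumed here), with kernel density f(y; \theta) and known null value \theta_0, the admixture model is

  f(y;\, \gamma, \theta) \;=\; \gamma\, f(y;\, \theta) \;+\; (1 - \gamma)\, f(y;\, \theta_0), \qquad 0 \le \gamma \le 1,

and homogeneity corresponds to \gamma = 0 or \theta = \theta_0. Under this null, \gamma sits on the boundary of the parameter space and \theta is not identifiable, which is why the LRT statistic fails to have the usual χ2 limit.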


Penalized Functional Regression, Jeff Goldsmith, Jennifer Feder, Ciprian M. Crainiceanu, Brian Caffo, Daniel Reich Jan 2010

Johns Hopkins University, Dept. of Biostatistics Working Papers

We develop fast fitting methods for generalized functional linear models. An undersmooth of the functional predictor is obtained by projecting on a large number of smooth eigenvectors and the coefficient function is estimated using penalized spline regression. Our method can be applied to many functional data designs including functions measured with and without error, sparsely or densely sampled. The methods also extend to the case of multiple functional predictors or functional predictors with a natural multilevel structure. Our approach can be implemented using standard mixed effects software and is computationally fast. Our methodology is motivated by a diffusion tensor imaging …


Regression Adjustment And Stratification By Propensity Score In Treatment Effect Estimation, Jessica A. Myers, Thomas A. Louis Jan 2010

Johns Hopkins University, Dept. of Biostatistics Working Papers

Propensity score adjustment of effect estimates in observational studies of treatment is a common technique used to control for bias in treatment assignment. In situations where matching on propensity score is not possible or desirable, regression adjustment and stratification are two options. Regression adjustment is used most often and can be highly efficient, but it can lead to biased results when model assumptions are violated. Validity of the stratification approach depends on fewer model assumptions, but it is less efficient than regression adjustment when the regression assumptions hold. To investigate these issues, we use simulation to compare stratification and regression adjustment. We …
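A minimal R sketch of the two adjustments being compared (the data-generating mechanism, model forms, and use of quintiles are assumptions made for illustration):

  # Simulated observational data: X = confounder, A = treatment, Y = outcome
  set.seed(1)
  n <- 5000
  X <- rnorm(n)
  A <- rbinom(n, 1, plogis(0.8 * X))
  Y <- 1 + 0.5 * A + X + rnorm(n)
  d <- data.frame(X, A, Y)

  # Estimated propensity score
  d$ps <- predict(glm(A ~ X, family = binomial, data = d), type = "response")

  # (1) Regression adjustment: include the propensity score in the outcome model
  coef(lm(Y ~ A + ps, data = d))["A"]

  # (2) Stratification: quintiles of the propensity score, then a weighted
  #     average of within-stratum differences in means
  d$stratum <- cut(d$ps, quantile(d$ps, 0:5 / 5), include.lowest = TRUE)
  strata_est <- tapply(seq_len(n), d$stratum, function(idx)
    with(d[idx, ], mean(Y[A == 1]) - mean(Y[A == 0])))
  sum(strata_est * table(d$stratum) / n)    # stratified treatment effect estimate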


On The Behaviour Of Marginal And Conditional Akaike Information Criteria In Linear Mixed Models, Sonja Greven, Thomas Kneib Nov 2009

Johns Hopkins University, Dept. of Biostatistics Working Papers

In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion (AIC) have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is no longer an asymptotically unbiased estimator of the Akaike information, and in fact favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that leads to the selection of any random effect not predicted to be exactly zero. We derive …
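For reference, the two criteria being contrasted can be written as follows (standard definitions, stated here as illustration): for a linear mixed model y = X\beta + Zb + \varepsilon with b \sim N(0, D),

  \mathrm{mAIC} = -2 \log L_{\text{marginal}}(\hat\beta, \hat\theta) + 2(p + q),
  \qquad
  \mathrm{cAIC} = -2 \log f(y \mid \hat\beta, \hat{b}) + 2\rho,

where the marginal likelihood integrates out b, p and q count the fixed-effect and variance parameters, and \rho is an effective number of parameters such as the trace of the hat matrix mapping y to \hat{y}.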


Combinatorial Mixtures Of Multiparameter Distributions, Valeria Edefonti, Giovanni Parmigiani Aug 2009

Johns Hopkins University, Dept. of Biostatistics Working Papers

We introduce combinatorial mixtures - a flexible class of models for inference on mixture distributions whose components have multidimensional parameters. The key idea is to allow each element of the component-specific parameter vectors to be shared by a subset of other components. This approach allows for mixtures that range from very flexible to very parsimonious, and unifies inference on component-specific parameters with inference on the number of components. We develop Bayesian inference and computation approaches for this class of distributions, and illustrate them in an application. This work was originally motivated by the analysis of cancer subtypes: in terms of …


Generalized Multilevel Functional Regression, Ciprian M. Crainiceanu, Ana-Maria Staicu, Chongzhi Di Sep 2008

Johns Hopkins University, Dept. of Biostatistics Working Papers

We introduce Generalized Multilevel Functional Linear Models (GMFLM), a novel statistical framework motivated by and applied to the Sleep Heart Health Study (SHHS), the largest community cohort study of sleep. The primary goal of SHHS is to study the association between sleep-disordered breathing (SDB) and adverse health effects. An exposure of primary interest is the sleep electroencephalogram (EEG), which was observed for thousands of individuals at two visits, roughly 5 years apart. This unique study design led to the development of models where the outcome, e.g., hypertension, is in an exponential family and the exposure, e.g., sleep EEG, is …


A Bayesian Approach To Effect Estimation Accounting For Adjustment Uncertainty, Chi Wang, Giovanni Parmigiani, Ciprian Crainiceanu, Francesca Dominici Jan 2008

Johns Hopkins University, Dept. of Biostatistics Working Papers

Adjustment for confounding factors is a common goal in the analysis of both observational and controlled studies. The choice of which confounding factors should be included in the model used to estimate an effect of interest is both critical and uncertain. For this reason it is important to develop methods that estimate an effect, while accounting not only for confounders, but also for the uncertainty about which confounders should be included. In a recent article, Crainiceanu et al. (2008) have identified limitations and potential biases of Bayesian Model Averaging (BMA) (Raftery et al., 1997; Hoeting et al., 1999) when applied to …


Geostatistical Inference Under Preferential Sampling, Peter J. Diggle, Raquel Menezes, Ting-Li Su Jan 2008

Johns Hopkins University, Dept. of Biostatistics Working Papers

Geostatistics involves the fitting of spatially continuous models to spatially discrete data (Chilès and Delfiner, 1999). Preferential sampling arises when the process that determines the data locations and the process being modelled are stochastically dependent. Conventional geostatistical methods assume, if only implicitly, that sampling is non-preferential. However, these methods are often used in situations where sampling is likely to be preferential. For example, in mineral exploration samples may be concentrated in areas thought likely to yield high-grade ore. We give a general expression for the likelihood function of preferentially sampled geostatistical data and describe how this can be evaluated approximately using …
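Schematically (notation assumed here): with S the latent spatial process, X the sampling locations, and Y the measurements, non-preferential sampling corresponds to [X | S] = [X], whereas under preferential sampling the likelihood for the observed (X, Y) requires integrating over the unobserved process,

  L(\theta) \;=\; \int [S;\, \theta]\, [X \mid S;\, \theta]\, [Y \mid X, S;\, \theta]\; dS,

which generally has no closed form, motivating the approximate evaluation referred to above.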


Optimal Propensity Score Stratification, Jessica A. Myers, Thomas A. Louis Oct 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

Stratifying on propensity score in observational studies of treatment is a common technique used to control for bias in treatment assignment; however, there have been few studies of the relative efficiency of the various ways of forming those strata. The standard method is to use the quintiles of propensity score to create subclasses, but this choice is not based on any measure of performance either observed or theoretical. In this paper, we investigate the optimal subclassification of propensity scores for estimating treatment effect with respect to mean squared error of the estimate. We consider the optimal formation of subclasses within …


Multiple Model Evaluation Absent The Gold Standard Via Model Combination, Edwin J. Iversen, Jr., Giovanni Parmigiani, Sining Chen Oct 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate measurement error parameters, true outcome — aka the gold standard — and a relative weighting of the predictive scores. We describe conditions necessary to estimate the gold standard and for these estimates to be calibrated and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study to evaluate a collection of BRCA1/BRCA2 gene …


Fixed-Width Output Analysis For Markov Chain Monte Carlo, Galin L. Jones, Murali Haran, Brian S. Caffo, Ronald Neath Feb 2005

Johns Hopkins University, Dept. of Biostatistics Working Papers

Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a complicated target distribution via simple ergodic averages. A fundamental question in MCMC applications is when should the sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the MCMC sampling the first time the width of a confidence interval based on the ergodic averages is less than a user-specified value. Hence calculating Monte Carlo standard errors is a critical step in assessing the output of the simulation. In particular, we …
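A minimal R sketch of a fixed-width stopping rule based on batch means (a generic illustration of the idea rather than the paper's procedure; the placeholder sampler, target half-width, minimum run length, and batching scheme are assumptions):

  # Batch-means estimate of the Monte Carlo standard error of mean(x)
  batch_means_se <- function(x, n_batches = floor(sqrt(length(x)))) {
    m <- floor(length(x) / n_batches)                  # batch size
    used <- n_batches * m
    means <- colMeans(matrix(x[seq_len(used)], nrow = m))
    sqrt(m * var(means) / used)
  }

  eps <- 0.01                                          # desired half-width
  chain <- numeric(0)
  repeat {
    chain <- c(chain, rnorm(1000, mean = 0, sd = 2))   # placeholder for new MCMC draws
    half_width <- qnorm(0.975) * batch_means_se(chain)
    if (length(chain) >= 5000 && half_width < eps) break  # stop once the interval is narrow enough
  }
  c(estimate = mean(chain), half_width = half_width, iterations = length(chain))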


Semiparametric Regression In Capture-Recapture Modelling, O. Gimenez, C. Barbraud, Ciprian M. Crainiceanu, S. Jenouvrier, B.T. Morgan Dec 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Capture-recapture models were developed to estimate survival using data arising from marking and monitoring wild animals over time. Variation in the survival process may be explained by incorporating relevant covariates. We develop nonparametric and semiparametric regression models for estimating survival in capture-recapture models. A fully Bayesian approach using MCMC simulations was employed to estimate the model parameters. The work is illustrated by a study of Snow petrels, in which survival probabilities are expressed as nonlinear functions of a climate covariate, using data from a 40-year study on marked individuals, nesting at Petrels Island, Terre Adelie.
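In symbols (a sketch with assumed notation): survival in year t is linked to the climate covariate x_t through a smooth function modeled as a penalized spline,

  \mathrm{logit}(\phi_t) \;=\; f(x_t) \;=\; \beta_0 + \beta_1 x_t + \sum_{k=1}^{K} b_k (x_t - \kappa_k)_+, \qquad b_k \sim N(0, \sigma_b^2),

where the spline coefficients b_k are treated as random effects, so the amount of smoothing is estimated within the Bayesian MCMC fit alongside the other capture-recapture parameters.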