Physical Sciences and Mathematics Commons

Articles 1 - 30 of 83

Full-Text Articles in Physical Sciences and Mathematics

Model-Robust Bayesian Regression And The Sandwich Estimator, Adam A. Szpiro, Kenneth M. Rice, Thomas Lumley Dec 2007

UW Biostatistics Working Paper Series

PLEASE NOTE THAT AN UPDATED VERSION OF THIS RESEARCH IS AVAILABLE AS WORKING PAPER 338 IN THE UNIVERSITY OF WASHINGTON BIOSTATISTICS WORKING PAPER SERIES (http://www.bepress.com/uwbiostat/paper338).

In applied regression problems there is often sufficient data for accurate estimation, but standard parametric models do not accurately describe the source of the data, so associated uncertainty estimates are not reliable. We describe a simple Bayesian approach to inference in linear regression that recovers least-squares point estimates while providing correct uncertainty bounds by explicitly recognizing that standard modeling assumptions need not be valid. Our model-robust development parallels frequentist estimating equations and leads to intervals …
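As a point of reference for the abstract above, the short sketch below computes the frequentist heteroskedasticity-consistent ("sandwich", HC0) variance for ordinary least squares, the estimator whose behaviour the model-robust Bayesian intervals are meant to parallel; this is a generic illustration on simulated data, not the authors' code.

```python
# Generic sketch (simulated data, not the authors' code): OLS point estimates
# with the HC0 "sandwich" covariance (X'X)^{-1} [sum_i e_i^2 x_i x_i'] (X'X)^{-1}.
import numpy as np

def ols_with_sandwich(X, y):
    """Return OLS coefficients and their HC0 sandwich covariance."""
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y                    # least-squares point estimate
    resid = y - X @ beta
    meat = X.T @ (resid[:, None] ** 2 * X)    # sum_i e_i^2 x_i x_i'
    return beta, bread @ meat @ bread

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=1 + np.abs(X[:, 1]), size=n)  # heteroskedastic errors
beta, cov = ols_with_sandwich(X, y)
print(beta, np.sqrt(np.diag(cov)))            # robust standard errors
```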


Spatio-Temporal Associations Between Goes Aerosol Optical Depth Retrievals And Ground-Level Pm2.5, Christopher J. Paciorek, Yang Liu, Hortensia Moreno-Macias, Shobha Kondragunta Dec 2007

Harvard University Biostatistics Working Paper Series

We assess the strength of association between aerosol optical depth (AOD) retrievals from the GOES Aerosol/Smoke Product (GASP) and ground-level fine particulate matter (PM2.5) to assess AOD as a proxy for PM2.5 in the United States. GASP AOD is retrieved from a geostationary platform and therefore provides dense temporal coverage with half-hourly observations every day, in contrast to once per day snapshots from polar-orbiting satellites. However, GASP AOD is based on a less-sophisticated instrument and retrieval algorithm. We find that correlations between GASP AOD and PM2.5 over time at fixed locations are reasonably high, except in the winter and in …
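The temporal comparison described above amounts to computing, at each fixed monitoring site, the correlation over time between AOD and PM2.5; a minimal pandas sketch (with hypothetical column names `site`, `aod`, `pm25`) is:

```python
# Minimal sketch with hypothetical column names; one row per (site, date).
import pandas as pd

def per_site_correlation(df: pd.DataFrame) -> pd.Series:
    """Correlation over time between AOD and PM2.5 at each fixed site."""
    return (df.dropna(subset=["aod", "pm25"])
              .groupby("site")
              .apply(lambda g: g["aod"].corr(g["pm25"])))
```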


Estimating Sensitivity And Specificity From A Phase 2 Biomarker Study That Allows For Early Termination, Margaret S. Pepe PhD Dec 2007

UW Biostatistics Working Paper Series

Development of a disease screening biomarker involves several phases. In phase 2, its sensitivity and specificity are compared with established thresholds for minimally acceptable performance. Since we anticipate that most candidate markers will not prove to be useful, and the availability of specimens and funding is limited, early termination of a study is appropriate if accumulating data indicate that the marker is inadequate. Yet, for markers that complete phase 2, we seek estimates of sensitivity and specificity to proceed with the design of subsequent phase 3 studies.

We suggest early stopping criteria and estimation procedures that adjust for bias caused by …
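The bias being adjusted for can be seen in a toy simulation: with a futility rule applied at an interim look, the naive estimate of sensitivity among studies that run to completion is conditionally biased upward. The sketch below uses hypothetical design values and is not the paper's procedure.

```python
# Toy illustration (hypothetical design values, not the paper's procedure):
# a single interim futility look, and the upward conditional bias of the
# naive sensitivity estimate among studies that continue to completion.
import numpy as np

rng = np.random.default_rng(1)
true_sens, n1, n_total, futility_cut = 0.80, 50, 200, 0.75

naive = []
for _ in range(5000):
    stage1 = rng.binomial(n1, true_sens)
    if stage1 / n1 < futility_cut:            # stop early for futility
        continue
    stage2 = rng.binomial(n_total - n1, true_sens)
    naive.append((stage1 + stage2) / n_total)

print(np.mean(naive))                         # slightly above 0.80: the bias to adjust for
```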


Bayesian Analysis For Penalized Spline Regression Using WinBUGS, Ciprian M. Crainiceanu, David Ruppert, M.P. Wand Dec 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

Penalized splines can be viewed as BLUPs in a mixed model framework, which allows the use of mixed model software for smoothing. Thus, software originally developed for Bayesian analysis of mixed models can be used for penalized spline regression. Bayesian inference for nonparametric models enjoys the flexibility of nonparametric models and the exact inference provided by the Bayesian inferential machinery. This paper provides a simple, yet comprehensive, set of programs for the implementation of nonparametric Bayesian analysis in WinBUGS. MCMC mixing is substantially improved over previous versions by using low-rank thin-plate splines instead of a truncated polynomial basis. Simulation time …
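As a language-agnostic sketch of the penalized-spline/mixed-model representation referred to above (not the WinBUGS programs from the paper), the following fits a low-rank radial basis with a ridge penalty on the spline coefficients, i.e. the BLUP form with the smoothing parameter held fixed rather than sampled by MCMC:

```python
# Sketch only: low-rank radial ("thin-plate" style) basis, penalized least squares.
import numpy as np

def lowrank_spline_fit(x, y, knots, lam):
    """Fit f(x) = b0 + b1*x + sum_k u_k |x - knot_k|^3 with penalty lam on the u_k."""
    def basis(t):
        X = np.column_stack([np.ones_like(t), t])            # unpenalized polynomial part
        Z = np.abs(t[:, None] - knots[None, :]) ** 3         # low-rank radial basis
        return np.hstack([X, Z])
    C = basis(x)
    D = np.diag([0.0, 0.0] + [lam] * len(knots))             # penalize only the spline coefficients
    coef = np.linalg.solve(C.T @ C + D, C.T @ y)
    return lambda t: basis(np.asarray(t, dtype=float)) @ coef

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
fhat = lowrank_spline_fit(x, y, knots=np.linspace(0.05, 0.95, 20), lam=1.0)
```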


Longitudinal Data With Follow-Up Truncated By Death: Finding A Match Between Analysis Method And Research Aims, Brenda Kurland, Laura Lee Johnson, Paula Diehr Nov 2007

UW Biostatistics Working Paper Series

Diverse analysis approaches have been proposed to distinguish data missing due to death from nonresponse, and to summarize trajectories of longitudinal data truncated by death. We demonstrate how these analysis approaches arise from factorizations of the distribution of longitudinal data and survival information. Models are illustrated using hypothetical data examples (cognitive functioning in older adults, and quality of life under hospice care) and up to 10 annual assessments of longitudinal cognitive functioning data for 3814 participants in an observational study. Unconditional models are appropriate when deaths do not occur, when deaths are independent of the longitudinal response, or when the unconditional longitudinal response averages …
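The factorizations referred to above can be written compactly; in generic notation (not necessarily the authors'), with Y the longitudinal responses and S the survival time,

```latex
% Generic notation: Y = longitudinal responses, S = survival (death) time.
\begin{align*}
f(Y, S) &= f(S)\,f(Y \mid S)  && \text{condition on death time (pattern-mixture style)}\\
f(Y, S) &= f(Y)\,f(S \mid Y)  && \text{unconditional model for } Y \text{ (selection style)}
\end{align*}
```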


Decomposition Of Regression Estimators To Explore The Influence Of "Unmeasured" Time-Varying Confounders, Yun Lu, Scott L. Zeger Nov 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y and to propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case where X and Y are continuous variables at equally-spaced times and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations with specific lag u. Controlling for a smooth …
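For concreteness, one plausible form of a lag-u matching estimator (the paper's exact definition may differ) regresses lag-u differences of Y on lag-u differences of X, so that slowly varying unmeasured confounders difference out at short lags:

```python
# Hedged sketch: an assumed form of a lag-u matching estimator, not the paper's definition.
import numpy as np

def matching_estimator(x, y, u):
    """Slope of lag-u differences of y on lag-u differences of x."""
    dx, dy = x[u:] - x[:-u], y[u:] - y[:-u]
    return np.sum(dx * dy) / np.sum(dx * dx)

rng = np.random.default_rng(3)
t = np.arange(500)
conf = np.sin(t / 50.0)                             # slowly varying unmeasured confounder
x = conf + rng.normal(size=t.size)
y = 0.5 * x + 2.0 * conf + rng.normal(size=t.size)  # true short-term effect = 0.5
print([round(matching_estimator(x, y, u), 2) for u in (1, 5, 50)])
# short lags difference out the confounder; the lag-50 estimate drifts upward
```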


A Parametric ROC Model Based Approach For Evaluating The Predictiveness Of Continuous Markers In Case-Control Studies, Ying Huang, Margaret Pepe Nov 2007

UW Biostatistics Working Paper Series

The predictiveness curve shows the population distribution of risk endowed by a marker or risk prediction model. It provides a means for assessing the model's capacity for risk stratification. Methods for making inference about the predictiveness curve have been developed using cross-sectional or cohort data. Here we consider inference based on case-control studies and prior knowledge about prevalence or incidence of the outcome. We exploit the relationship between the ROC curve and the predictiveness curve given disease prevalence. Methods are developed for deriving the predictiveness curve from a parametric ROC model. Estimation of the whole range and of a portion …
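To make the ROC/predictiveness connection concrete, the sketch below assumes a binormal marker model and a known prevalence (illustrative values only, not the authors' estimator): compute risk(y) by Bayes' theorem and read off the predictiveness curve R(v) as the v-th quantile of risk in the population.

```python
# Illustrative sketch: binormal marker (controls N(0,1), cases N(mu, sigma^2)),
# known prevalence rho; R(v) = v-th population quantile of risk(Y).
import numpy as np
from scipy.stats import norm

rho, mu, sigma = 0.10, 1.2, 1.0                    # hypothetical values

def risk(y):
    f1 = norm.pdf(y, loc=mu, scale=sigma)          # marker density in cases
    f0 = norm.pdf(y)                               # marker density in controls
    return rho * f1 / (rho * f1 + (1 - rho) * f0)  # Bayes' theorem

rng = np.random.default_rng(4)
n = 200_000
diseased = rng.random(n) < rho
y = np.where(diseased, rng.normal(mu, sigma, n), rng.normal(0.0, 1.0, n))
v = np.linspace(0.01, 0.99, 99)
R_v = np.quantile(risk(y), v)                      # predictiveness curve on a grid
print(R_v[[0, 49, 98]])                            # low, median, and high risk quantiles
```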


Estimation Of Dose-Response Functions For Longitudinal Data, Erica E M Moodie, David A. Stephens Nov 2007

COBRA Preprint Series

In a longitudinal study of dose-response, the presence of confounding or non-compliance compromises the estimation of the true effect of a treatment. Standard regression methods cannot remove the bias introduced by patient-selected treatment level, that is, they do not permit the estimation of the causal effect of dose. Using an approach based on the Generalized Propensity Score (GPS), a generalization of the classical, binary treatment propensity score, it is possible to construct a balancing score that provides a more meaningful estimation procedure for the true (unconfounded) effect of dose. Previously, the GPS has been applied only in a single interval …
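A single-time-point sketch of the GPS idea (hypothetical simulated data; the paper's longitudinal development is more involved): model dose given covariates, evaluate the GPS at the observed dose, fit a flexible outcome model in (dose, GPS), and average predictions to obtain a dose-response curve.

```python
# Minimal single-time-point GPS sketch on simulated data (not the paper's method).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)                           # confounder
dose = 1.0 + 0.8 * x + rng.normal(size=n)        # patient-selected dose depends on x
y = 2.0 * dose + 1.5 * x + rng.normal(size=n)    # true causal dose effect = 2

print(np.polyfit(dose, y, 1)[0])                 # naive slope, inflated (about 2.7) by confounding

# Stage 1: model dose | covariates; GPS = conditional density at the observed dose
A = np.column_stack([np.ones(n), x])
alpha = np.linalg.lstsq(A, dose, rcond=None)[0]
sd = np.std(dose - A @ alpha)
gps = norm.pdf(dose, loc=A @ alpha, scale=sd)

# Stage 2: flexible outcome regression in (dose, GPS)
def design(d, r):
    return np.column_stack([np.ones_like(d), d, d**2, r, r**2, d * r])
beta = np.linalg.lstsq(design(dose, gps), y, rcond=None)[0]

# Stage 3: average dose-response function over the covariate distribution
grid = np.linspace(0.0, 3.0, 7)
adrf = [np.mean(design(np.full(n, d), norm.pdf(d, loc=A @ alpha, scale=sd)) @ beta)
        for d in grid]
print(np.round(np.diff(adrf) / np.diff(grid), 2))  # with a rich enough outcome model,
                                                   # these slopes approach the causal value 2
```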


Bootstrap Confidence Regions For Optimal Operating Conditions In Response Surface Methodology, Roger D. Gibb, I-Li Lu, Walter H. Carter Jr Nov 2007

COBRA Preprint Series

This article concerns the application of bootstrap methodology to construct a likelihood-based confidence region for operating conditions associated with the maximum of a response surface constrained to a specified region. Unlike classical methods based on the stationary point, proper interpretation of this confidence region does not depend on unknown model parameters. In addition, the methodology does not require the assumption of normally distributed errors. The approach is demonstrated for concave-down and saddle system cases in two dimensions. Simulation studies were performed to assess the coverage probability of these regions.

AMS 2000 Subject Classification: 62F25, 62F40, 62F30, 62J05.

Key words: Stationary …
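A schematic of the bootstrap idea described above (illustrative only; the paper's likelihood-based construction differs in detail): refit a quadratic response surface to bootstrap resamples and collect the constrained maximizer from each refit, so that the cloud of maximizers delimits a confidence region for the optimal operating conditions without assuming normal errors.

```python
# Illustrative sketch only; hypothetical concave-down surface on the region [-1,1]^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def quad_design(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

X = rng.uniform(-1, 1, size=(60, 2))
y = 5 - (X[:, 0] - 0.3) ** 2 - 2 * (X[:, 1] + 0.2) ** 2 + rng.normal(0, 0.3, 60)

def constrained_argmax(beta):
    """Maximize the fitted quadratic surface over the constrained region."""
    obj = lambda x: -(quad_design(np.atleast_2d(x)) @ beta)[0]
    return minimize(obj, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)]).x

optima = []
for _ in range(500):                              # case-resampling bootstrap
    idx = rng.integers(0, 60, 60)
    b = np.linalg.lstsq(quad_design(X[idx]), y[idx], rcond=None)[0]
    optima.append(constrained_argmax(b))
optima = np.array(optima)                         # e.g. keep a central 95% set as the region
```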


Loss-Based Estimation With Evolutionary Algorithms And Cross-Validation, David Shilane, Richard H. Liang, Sandrine Dudoit Nov 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

Many statistical inference methods rely upon selection procedures to estimate a parameter of the joint distribution of explanatory and outcome data, such as the regression function. Within the general framework for loss-based estimation of Dudoit and van der Laan, this project proposes an evolutionary algorithm (EA) as a procedure for risk optimization. We also analyze the size of the parameter space for polynomial regression under interaction constraints along with constraints on either the polynomial or variable degree.
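A toy version of the EA-plus-cross-validation idea (not the authors' implementation): evolve a population of candidate polynomial degrees, using cross-validated squared-error risk as the fitness criterion.

```python
# Toy sketch: (mu + lambda)-style evolutionary search over polynomial degree,
# with 5-fold cross-validated squared-error risk as the selection criterion.
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, 300)
y = 1 + 2 * x - 3 * x**3 + rng.normal(scale=0.3, size=300)   # cubic truth

def cv_risk(degree, folds=5):
    idx = rng.permutation(len(x))
    risks = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        coef = np.polyfit(x[train], y[train], degree)
        risks.append(np.mean((y[test] - np.polyval(coef, x[test])) ** 2))
    return np.mean(risks)

pop = list(rng.integers(1, 12, size=6))                       # initial population of degrees
for generation in range(20):
    children = [int(np.clip(p + rng.integers(-2, 3), 1, 15)) for p in pop]  # mutate
    pop = sorted(pop + children, key=cv_risk)[:6]             # keep the lowest-risk candidates
print(pop[0])                                                 # selected degree (typically 3 here)
```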


Resampling-Based Empirical Bayes Multiple Testing Procedures For Controlling Generalized Tail Probability And Expected Value Error Rates, Sandrine Dudoit, Houston N. Gilbert, Mark J. Van Der Laan Nov 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q,g) = Pr(g(Vn,Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn,Sn)], for arbitrary functions g(Vn,Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the …
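For orientation, several familiar Type I error rates are special cases of these definitions (with the usual convention that the false discovery proportion is zero when no hypotheses are rejected):

```latex
% V_n = number of false positives, S_n = number of true positives (V_n/(V_n+S_n) := 0 if no rejections).
\begin{align*}
\mathrm{FWER}     &= \Pr(V_n > 0) = gTP\bigl(0,\ g(v,s)=v\bigr),\\
\mathrm{gFWER}(k) &= \Pr(V_n > k) = gTP\bigl(k,\ g(v,s)=v\bigr),\\
\mathrm{TPPFP}(q) &= \Pr\!\Bigl(\tfrac{V_n}{V_n+S_n} > q\Bigr) = gTP\bigl(q,\ g(v,s)=\tfrac{v}{v+s}\bigr),\\
\mathrm{FDR}      &= E\!\Bigl[\tfrac{V_n}{V_n+S_n}\Bigr] = gEV\bigl(g(v,s)=\tfrac{v}{v+s}\bigr).
\end{align*}
```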


Assessing Population Level Genetic Instability Via Moving Average, Samuel Mcdaniel, Rebecca Betensky, Tianxi Cai Nov 2007

Harvard University Biostatistics Working Paper Series

No abstract provided.


Identifiability And Estimation Of Causal Effects In Randomized Trials With Noncompliance And Completely Non-Ignorable Missing-Data, Hua Chen, Zhi Geng, Xiao-Hua Zhou Nov 2007

UW Biostatistics Working Paper Series

In this paper we first studied parameter identifiability in randomized clinical trials with noncompliance and missing outcomes. We showed that under certain conditions the parameters of interest were identifiable even under different types of completely non-ignorable missing data, that is, when the missing mechanism depends on the outcome. We then derived their maximum likelihood (ML) and moment estimators and evaluated their finite-sample properties in simulation studies in terms of bias, efficiency and robustness. Our sensitivity analysis showed that the assumed non-ignorable missing-data model had an important impact on the estimated complier average causal effect (CACE) parameter. Our new method provides some new …


Nonparametric And Semiparametric Group Sequential Methods For Comparing Accuracy Of Diagnostic Tests, Liansheng Tang, Scott S. Emerson, Xiao-Hua Zhou Oct 2007

UW Biostatistics Working Paper Series

Comparison of the accuracy of two diagnostic tests using their receiver operating characteristic (ROC) curves has typically been conducted using fixed-sample designs. On the other hand, the human experimentation inherent in a comparison of diagnostic modalities argues for periodic monitoring of the accruing data to address many issues related to the ethics and efficiency of the medical study. To date, very little research has been done on the use of sequential sampling plans for comparative ROC studies, even when these studies may use expensive and unsafe diagnostic procedures. In this paper, we propose a nonparametric …


A Bayesian Image Analysis Of The Change In Tumor/Brain Contrast Uptake Induced By Radiation Via Reversible Jump Markov Chain Monte Carlo, Xiaoxi Zhang, Tim Johnson, Roderick J.A. Little Oct 2007

The University of Michigan Department of Biostatistics Working Paper Series

This work is motivated by a pilot study on the change in tumor/brain contrast uptake induced by radiation via quantitative Magnetic Resonance Imaging. The results inform the optimal timing of administering chemotherapy in the context of radiotherapy. A noticeable feature of the data is spatial heterogeneity. The tumor is physiologically and pathologically distinct from surrounding healthy tissue. Also, the tumor itself is usually highly heterogeneous. We employ a Gaussian Hidden Markov Random Field model that respects the above features. The model introduces a latent layer of discrete labels from a Markov Random Field (MRF) governed by a spatial regularization parameter. …


A Note On Targeted Maximum Likelihood And Right Censored Data, Mark J. Van Der Laan, Daniel Rubin Oct 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

A popular way to estimate an unknown parameter is with substitution, or evaluating the parameter at a likelihood based fit of the data generating density. In many cases, such estimators have substantial bias and can fail to converge at the parametric rate. van der Laan and Rubin (2006) introduced targeted maximum likelihood learning, removing these shackles from substitution estimators, which were made in full agreement with the locally efficient estimating equation procedures as presented in Robins and Rotnitzky (1992) and van der Laan and Robins (2003). This note illustrates how targeted maximum likelihood can be applied in right censored data …


Detailed Version: Analyzing Direct Effects In Randomized Trials With Secondary Interventions: An Application To HIV Prevention Trials, Michael A. Rosenblum, Nicholas P. Jewell, Mark J. Van Der Laan, Stephen Shiboski, Ariane Van Der Straten, Nancy Padian Oct 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

This is the detailed technical report that accompanies the paper “Analyzing Direct Effects in Randomized Trials with Secondary Interventions: An Application to HIV Prevention Trials” (an unpublished, technical report version of which is available online at http://www.bepress.com/ucbbiostat/paper223).

The version here gives full details of the models for the time-dependent analysis, and presents further results in the data analysis section. The Methods for Improving Reproductive Health in Africa (MIRA) trial is a recently completed randomized trial that investigated the effect of diaphragm and lubricant gel use in reducing HIV infection among susceptible women. 5,045 women were randomly assigned to either the …


A Smoothing Approach To Data Masking, Yijie Zhou, Francesca Dominici, Thomas A. Louis Oct 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

Individual-level data are often not publicly available due to confidentiality. Instead, masked data are released for public use. However, analyses performed using masked data may produce invalid statistical results such as biased parameter estimates or incorrect standard errors. In this paper, we propose a data masking method using spatial smoothing, and we investigate the bias of parameter estimates resulting from analyses using the masked data for Generalized Linear Models (GLM). The method allows for varying both the form and the degree of masking by utilizing a smoothing weight function and a smoothness parameter. We show that data masking by using …
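A bare-bones version of masking by spatial smoothing (not the paper's exact scheme): replace each point-referenced value with a kernel-weighted average of its neighbours, with the bandwidth playing the role of the smoothness (degree-of-masking) parameter.

```python
# Minimal sketch: Gaussian-kernel spatial smoothing as a masking step.
import numpy as np

def mask_by_smoothing(coords, values, bandwidth):
    """Replace each value with a kernel-weighted average of all values."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    w = np.exp(-d2 / (2 * bandwidth**2))                            # smoothing weight function
    return (w @ values) / w.sum(axis=1)

rng = np.random.default_rng(8)
coords = rng.uniform(0, 10, size=(500, 2))
exposure = np.sin(coords[:, 0]) + rng.normal(scale=0.2, size=500)
masked = mask_by_smoothing(coords, exposure, bandwidth=1.0)          # released instead of raw values
```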


Optimal Propensity Score Stratification, Jessica A. Myers, Thomas A. Louis Oct 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

Stratifying on propensity score in observational studies of treatment is a common technique used to control for bias in treatment assignment; however, there have been few studies of the relative efficiency of the various ways of forming those strata. The standard method is to use the quintiles of propensity score to create subclasses, but this choice is not based on any measure of performance either observed or theoretical. In this paper, we investigate the optimal subclassification of propensity scores for estimating treatment effect with respect to mean squared error of the estimate. We consider the optimal formation of subclasses within …
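The standard quintile subclassification estimator that the paper takes as its starting point looks roughly as follows on simulated data (for simplicity the true propensity score is used; in practice it would be estimated, e.g. by logistic regression):

```python
# Sketch of quintile subclassification on the propensity score (simulated data).
import numpy as np

rng = np.random.default_rng(9)
n = 4000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))                     # propensity score (true, for simplicity)
z = rng.binomial(1, p)                           # treatment assignment
y = 1.0 * z + 2.0 * x + rng.normal(size=n)       # true treatment effect = 1

edges = np.quantile(p, np.linspace(0, 1, 6))     # quintile boundaries
strata = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, 4)

effects, weights = [], []
for s in range(5):
    m = strata == s
    effects.append(y[m & (z == 1)].mean() - y[m & (z == 0)].mean())
    weights.append(m.mean())
print(np.dot(effects, weights))                  # much closer to 1 than the naive difference in
                                                 # means; the residual bias is what motivates
                                                 # optimizing the number and placement of strata
```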


Multiple Model Evaluation Absent The Gold Standard Via Model Combination, Edwin J. Iversen, Jr., Giovanni Parmigiani, Sining Chen Oct 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate measurement error parameters, true outcome — aka the gold standard — and a relative weighting of the predictive scores. We describe conditions necessary to estimate the gold standard and for these estimates to be calibrated and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study to evaluate a collection of BRCA1/BRCA2 gene …


Analyzing Direct Effects In Randomized Trials With Secondary Interventions, Michael Rosenblum, Nicholas P. Jewell, Mark J. Van Der Laan, Stephen Shiboski, Ariane Van Der Straten, Nancy Padian Sep 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

The Methods for Improving Reproductive Health in Africa (MIRA) trial is a recently completed randomized trial that investigated the effect of diaphragm and lubricant gel use in reducing HIV infection among susceptible women. 5,045 women were randomly assigned to either the active treatment arm or the control arm. Additionally, all subjects in both arms received intensive condom counselling and provision, the "gold standard" HIV prevention barrier method. There was much lower reported condom use in the intervention arm than in the control arm, making it difficult to answer important public health questions based solely on the intention-to-treat analysis. We adapt an analysis …


ROC Surfaces In The Presence Of Verification Bias, Yueh-Yun Chi, Xiao-Hua (Andrew) Zhou Sep 2007

UW Biostatistics Working Paper Series

In diagnostic medicine, the Receiver Operating Characteristic (ROC) surface is one of the established tools for assessing the accuracy of a diagnostic test in discriminating three disease states, and the volume under the ROC surface has served as a summary index for diagnostic accuracy. In practice, the selection for definitive disease examination may be based on initial test measurements, and induces verification bias in the assessment. We propose here a nonparametric likelihood-based approach to construct the empirical ROC surface in the presence of differential verification, and to estimate the volume under the ROC surface. Estimators of the standard deviation are …


Time-Dependent Performance Comparison Of Stochastic Optimization Algorithms, David Shilane, Jarno Martikainen, Seppo Ovaska Aug 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

This paper proposes a statistical methodology for comparing the performance of stochastic optimization algorithms that iteratively generate candidate optima. The fundamental data structure of the results of these algorithms is a time series. Algorithmic differences may be assessed through a procedure of statistical sampling and multiple hypothesis testing of time series data. Shilane et al. propose a general framework for performance comparison of stochastic optimization algorithms that result in a single candidate optimum. This project seeks to extend this framework to assess performance in time series data structures. The proposed methodology analyzes empirical data to determine the generation intervals in …


Comparing Trends In Cancer Rates Across Overlapping Regions, Yi Li, Ram C. Tiwari Aug 2007

Harvard University Biostatistics Working Paper Series

No abstract provided.


Effective Communication Of Standard Errors And Confidence Intervals, Thomas A. Louis, Scott L. Zeger Aug 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

We recommend a format for communicating an estimate with its standard error or confidence interval. The format reinforces that the associated variability is an inseparable component of the estimate and it substantially improves clarity in tabular displays.


Correcting Instrumental Variables Estimators For Systematic Measurement Error, Stijn Vansteelandt, Manoochehr Babanezhad, Els Goetghebeur Aug 2007

Harvard University Biostatistics Working Paper Series

No abstract provided.


Inference For Survival Curves With Informatively Coarsened Discrete Event-Time Data: Application To ALIVE, Michelle Shardell, Daniel O. Scharfstein, David Vlahov, Noya Galai Aug 2007

Johns Hopkins University, Dept. of Biostatistics Working Papers

In many prospective studies, including AIDS Link to the Intravenous Experience (ALIVE), researchers are interested in comparing event-time distributions (e.g., for human immunodeficiency virus seroconversion) between a small number of groups (e.g., risk behavior categories). However, these comparisons are complicated by participants missing visits or attending visits off schedule and seroconverting during this absence. Such data are interval-censored, or more generally, coarsened. Most analysis procedures rely on the assumption of non-informative censoring, a special case of coarsening at random that may produce biased results if not valid. Our goal is to perform inference for estimated survival functions across a small number of …


Effectively Combining Independent 2 X 2 Tables For Valid Inferences In Meta Analysis With All Available Data But No Artificial Continuity Corrections For Studies With Zero Events And Its Application To The Analysis Of Rosiglitazone's Cardiovascular Disease Related Event Data, Lu Tian, Tianxi Cai, Nikita Piankov, Pierre-Yves Cremieux, L. J. Wei Aug 2007

Harvard University Biostatistics Working Paper Series

No abstract provided.


Biomarker Discovery Using Targeted Maximum Likelihood Estimation: Application To The Treatment Of Antiretroviral Resistant HIV Infection, Oliver Bembom, Maya L. Petersen, Soo-Yon Rhee, W. Jeffrey Fessel, Sandra E. Sinisi, Robert W. Shafer, Mark J. Van Der Laan Aug 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

Researchers in clinical science and bioinformatics frequently aim to learn which of a set of candidate biomarkers is important in determining a given outcome, and to rank the contributions of the candidates accordingly. This article introduces a new approach to research questions of this type, based on targeted maximum likelihood estimation of variable importance measures.

The methodology is illustrated using an example drawn from the treatment of HIV infection. Specifically, given a list of candidate mutations in the protease enzyme of HIV, we aim to discover mutations that reduce clinical virologic response to antiretroviral regimens containing the protease inhibitor lopinavir. …


A Censored Multinomial Regression Model For Perinatal Mother To Child Transmission Of HIV, Charlotte C. Gard, Elizabeth R. Brown Jul 2007

UW Biostatistics Working Paper Series

In studies designed to estimate rates of perinatal mother to child transmission of HIV, HIV assays are scheduled at multiple points in time. Still, infection status for some infants at some time points is often unknown, particularly when interim analyses are conducted. Logistic regression and Cox proportional hazards regression are commonly used to estimate covariate-adjusted transmission rates, but their methods for handling missing data may be inadequate. Here, we propose using censored multinomial regression models to estimate cumulative and conditional rates of HIV transmission. Through simulation, we show that the proposed methods perform better than standard logistic models in terms …