Open Access. Powered by Scholars. Published by Universities.®

Medicine and Health Sciences Commons

Full-Text Articles in Medicine and Health Sciences

Uncertainty And The Value Of Diagnostic Information With Application To Axillary Lymph Node Dissection In Breast Cancer, Giovanni Parmigiani Dec 2003

Johns Hopkins University, Dept. of Biostatistics Working Papers

In clinical decision making, it is common to ask whether, and how much, a diagnostic procedure is contributing to subsequent treatment decisions. Statistically, quantification of the value of the information provided by a diagnostic procedure can be carried out using decision trees with multiple decision points, representing both the diagnostic test and the subsequent treatments that may depend on the test's results. This article investigates probabilistic sensitivity analysis approaches for exploring and communicating parameter uncertainty in such decision trees. Complexities arise because uncertainty about a model's inputs determines uncertainty about optimal decisions at all decision nodes of a tree. We …
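The core calculation can be illustrated with a toy two-action decision problem; all utilities, test characteristics, and the prevalence prior below are made-up assumptions for illustration, not values from the paper:

```python
import random

random.seed(0)

# Toy decision problem: treat vs. no treat, with utilities indexed by
# (action, disease state). All numbers are illustrative assumptions.
U = {("treat", 1): 0.80, ("treat", 0): 0.95,
     ("none", 1): 0.30, ("none", 0): 1.00}
sens, spec = 0.90, 0.85          # assumed accuracy of the diagnostic test

def best_eu(p):
    """Best achievable expected utility at disease probability p."""
    return max(p * U[("treat", 1)] + (1 - p) * U[("treat", 0)],
               p * U[("none", 1)] + (1 - p) * U[("none", 0)])

def eu_with_test(p):
    """Expected utility when the action is chosen after seeing the test."""
    p_pos = p * sens + (1 - p) * (1 - spec)
    post_pos = p * sens / p_pos                  # P(disease | test +)
    post_neg = p * (1 - sens) / (1 - p_pos)      # P(disease | test -)
    return p_pos * best_eu(post_pos) + (1 - p_pos) * best_eu(post_neg)

# Probabilistic sensitivity analysis: propagate uncertainty about the
# prevalence (a Beta prior here) into the value-of-information summary.
draws = [random.betavariate(20, 80) for _ in range(5000)]
voi = [eu_with_test(p) - best_eu(p) for p in draws]
mean_voi = sum(voi) / len(voi)
```

Because the posterior is a martingale and `best_eu` is convex (a max of linear functions), each draw's value of information is nonnegative; the spread of `voi` across draws is one way to communicate the parameter uncertainty the abstract describes.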


Survival Model Predictive Accuracy And Roc Curves, Patrick Heagerty, Yingye Zheng Dec 2003

UW Biostatistics Working Paper Series

The predictive accuracy of a survival model can be summarized using extensions of the proportion of variation explained by the model, or R^2, commonly used for continuous response models, or using extensions of sensitivity and specificity which are commonly used for binary response models.

In this manuscript we propose new time-dependent accuracy summaries based on time-specific versions of sensitivity and specificity calculated over risk sets. We connect the accuracy summaries to a previously proposed global concordance measure which is a variant of Kendall's tau. In addition, we show how standard Cox regression output can be used to obtain estimates of …
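A minimal empirical sketch of the incident/dynamic idea, assuming complete (uncensored) follow-up and made-up data — cases at time t are subjects failing at t, controls are the rest of the risk set; this is not the Cox-based estimator the paper develops:

```python
# Time-specific sensitivity and specificity over the risk set {i: T_i >= t}.
# "Incident" cases fail exactly at t; "dynamic" controls survive past t.
def incident_dynamic_accuracy(markers, times, t, cutoff):
    risk_set = [(m, T) for m, T in zip(markers, times) if T >= t]
    cases    = [m for m, T in risk_set if T == t]
    controls = [m for m, T in risk_set if T > t]
    sens = sum(m > cutoff for m in cases) / len(cases)
    spec = sum(m <= cutoff for m in controls) / len(controls)
    return sens, spec

# Hypothetical marker values and event times for six subjects.
markers = [2.0, 1.5, 3.1, 0.4, 2.7, 0.9]
times   = [1,   2,   1,   3,   1,   3]
sens, spec = incident_dynamic_accuracy(markers, times, t=1, cutoff=1.0)
```

Sweeping `cutoff` over the marker range traces out a time-specific ROC curve at each t.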


Time-Series Studies Of Particulate Matter, Michelle L. Bell, Jonathan M. Samet, Francesca Dominici Nov 2003

Johns Hopkins University, Dept. of Biostatistics Working Papers

Studies of air pollution and human health have evolved from descriptive studies of the early phenomena of large increases in adverse health effects following extreme air pollution episodes, to time-series analyses and the development of sophisticated regression models. In fact, advanced statistical methods are necessary to address the many challenges inherent in the detection of a small pollution risk in the presence of many confounders. This paper reviews the history, methods, and findings of the time-series studies estimating health risks associated with short-term exposure to particulate matter, though much of the discussion is applicable to epidemiological studies of air pollution …


A Corrected Pseudo-Score Approach For Additive Hazards Model With Longitudinal Covariates Measured With Error, Xiao Song, Yijian Huang Nov 2003

UW Biostatistics Working Paper Series

In medical studies, it is often of interest to characterize the relationship between a time-to-event and covariates, not only time-independent but also time-dependent. Time-dependent covariates are generally measured intermittently and with error. Recent interest has focused on the proportional hazards framework, with longitudinal data jointly modeled through a mixed effects model. However, approaches under this framework depend on the normality assumption of the error, and might encounter intractable numerical difficulties in practice. This motivates us to consider an alternative framework, that is, the additive hazards model, under which little has been done when time-dependent covariates are measured with error. We propose …


Cross-Calibration Of Stroke Disability Measures: Bayesian Analysis Of Longitudinal Ordinal Categorical Data Using Negative Dependence, Giovanni Parmigiani, Heidi W. Ashih, Gregory P. Samsa, Pamela W. Duncan, Sue Min Lai, David B. Matchar Aug 2003

Johns Hopkins University, Dept. of Biostatistics Working Papers

It is common to assess disability of stroke patients using standardized scales, such as the Rankin Stroke Outcome Scale (RS) and the Barthel Index (BI). The Rankin Scale, which was designed for applications to stroke, is based on assessing directly the global conditions of a patient. The Barthel Index, which was designed for general applications, is based on a series of questions about the patient’s ability to carry out 10 basic activities of daily living. As both scales are commonly used, but few studies use both, translating between scales is important in gaining an overall understanding of the efficacy of …


An Extended General Location Model For Causal Inference From Data Subject To Noncompliance And Missing Values, Yahong Peng, Rod Little, Trivellore E. Raghunathan Aug 2003

The University of Michigan Department of Biostatistics Working Paper Series

Noncompliance is a common problem in experiments involving randomized assignment of treatments, and standard analyses based on intention-to-treat or treatment received have limitations. An attractive alternative is to estimate the Complier-Average Causal Effect (CACE), which is the average treatment effect for the subpopulation of subjects who would comply under either treatment (Angrist, Imbens and Rubin, 1996, henceforth AIR). We propose an Extended General Location Model to estimate the CACE from data with non-compliance and missing data in the outcome and in baseline covariates. Models for both continuous and categorical outcomes and ignorable and latent ignorable (Frangakis and Rubin, 1999) …
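For complete data without missingness, the classical moment (instrumental-variable) version of the CACE cited from AIR is simply the intention-to-treat effect on the outcome divided by the effect of assignment on treatment received; a sketch on made-up data:

```python
def cace(z, d, y):
    """Moment estimator of the Complier-Average Causal Effect.
    z: 0/1 randomized assignment, d: 0/1 treatment received, y: outcome."""
    mean = lambda xs: sum(xs) / len(xs)
    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])  # ITT outcome means
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    d1 = mean([di for zi, di in zip(z, d) if zi == 1])  # compliance rates
    d0 = mean([di for zi, di in zip(z, d) if zi == 0])
    return (y1 - y0) / (d1 - d0)

# Hypothetical trial: one assigned subject fails to take treatment.
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 0, 1, 0, 0, 0, 0]
y = [5, 6, 2, 7, 2, 3, 1, 2]
est = cace(z, d, y)
```

The model in the paper generalizes this to handle missing outcomes and covariates, which the moment estimator above cannot.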


Temporal Stability And Geographic Variation In Cumulative Case Fatality Rates And Average Doubling Times Of Sars Epidemics, Alison P. Galvani, Xiudong Lei, Nicholas P. Jewell Jun 2003

U.C. Berkeley Division of Biostatistics Working Paper Series

We analyze temporal stability and geographic trends in cumulative case fatality rates and average doubling times of severe acute respiratory syndrome (SARS). In part, we account for correlations between case fatality rates and doubling times through differences in control measures. We discuss factors that may alter future estimates of case fatality rates. We also discuss reasons for heterogeneity in doubling times among countries and the implications for the control of SARS in different countries and parameterization of epidemic models.
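The two summary quantities are straightforward to compute; a sketch with made-up numbers (not SARS data), assuming exponential growth of cumulative cases between two observation points:

```python
import math

def doubling_time(n0, nt, days):
    """Average doubling time assuming exponential growth from n0 to nt."""
    return days * math.log(2) / math.log(nt / n0)

def cumulative_cfr(deaths, cases):
    """Cumulative case fatality rate: deaths among reported cases to date."""
    return deaths / cases

td  = doubling_time(n0=100, nt=800, days=21)   # 100 -> 800 cases in 3 weeks
cfr = cumulative_cfr(deaths=15, cases=300)
```

Note that early in an epidemic the cumulative CFR is biased by cases whose outcomes are not yet resolved, one of the factors the abstract alludes to.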


Identifying Target Populations For Screening Or Not Screening Using Logic Regression, Holly Janes, Margaret S. Pepe, Charles Kooperberg, Polly Newcomb May 2003

UW Biostatistics Working Paper Series

Colorectal cancer remains a significant public health concern despite the fact that effective screening procedures exist and that the disease is treatable when detected at early stages. Numerous risk factors for colon cancer have been identified, but none are very predictive alone. We sought to determine whether there are certain combinations of risk factors that distinguish well between cases and controls, and that could be used to identify subjects at particularly high or low risk of the disease to target screening. Using data from the Seattle site of the Colorectal Cancer Family Registry (C-CFR), we fit logic regression models to …


Improved Confidence Intervals For The Sensitivity At A Fixed Level Of Specificity Of A Continuous-Scale Diagnostic Test, Xiao-Hua Zhou, Gengsheng Qin May 2003

UW Biostatistics Working Paper Series

For a continuous-scale test, it is often of interest to construct a confidence interval for the sensitivity of the diagnostic test at the cut-off that yields a predetermined level of its specificity (e.g., 80%, 90%, or 95%). In this paper we propose two new intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity. We then conducted simulation studies to compare the relative performance of these two intervals with the best existing BCa bootstrap interval, proposed by Platt et al. (2000). Our simulation results showed that the newly proposed intervals are better than the BCa bootstrap …
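A simple percentile-bootstrap baseline for this quantity (not the BCa interval of Platt et al., nor the paper's proposed intervals) on simulated data — the cut-off is the empirical 90th percentile of the control values:

```python
import random

random.seed(1)

def sens_at_spec(cases, controls, spec=0.90):
    """Sensitivity at the cut-off giving the target specificity."""
    cutoff = sorted(controls)[int(spec * len(controls))]
    return sum(x > cutoff for x in cases) / len(cases)

# Hypothetical test values: diseased shifted up by 2 standard deviations.
cases    = [random.gauss(2.0, 1.0) for _ in range(200)]
controls = [random.gauss(0.0, 1.0) for _ in range(200)]
point = sens_at_spec(cases, controls)

# Percentile bootstrap: resample cases and controls independently.
boots = []
for _ in range(500):
    bc = [random.choice(cases) for _ in cases]
    bn = [random.choice(controls) for _ in controls]
    boots.append(sens_at_spec(bc, bn))
boots.sort()
lo, hi = boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots))]
```

The interval must account for variability in both the estimated cut-off and the case sample, which is why naive binomial intervals undercover here.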


A Bootstrap Confidence Interval Procedure For The Treatment Effect Using Propensity Score Subclassification, Wanzhu Tu, Xiao-Hua Zhou May 2003

UW Biostatistics Working Paper Series

In the analysis of observational studies, propensity score subclassification has been shown to be a powerful method for adjusting unbalanced covariates for the purpose of causal inference. One practical difficulty in carrying out such an analysis is to obtain a correct variance estimate for such inferences, while reducing bias in the estimate of the treatment effect due to an imbalance in the measured covariates. In this paper, we propose a bootstrap procedure for inference concerning the average treatment effect; our bootstrap method is based on an extension of Efron’s bias-corrected accelerated (BCa) bootstrap confidence interval to a two-sample problem. …
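The subclassification step, with a plain percentile bootstrap standing in for the paper's two-sample BCa extension, can be sketched on simulated data; the propensity score is taken as already estimated (here it is simply simulated), and all data-generating numbers are assumptions:

```python
import random

random.seed(2)

def subclass_effect(score, treat, y, k=5):
    """Weighted average of within-stratum treated-minus-control means,
    with strata formed as quintiles of the propensity score."""
    order = sorted(range(len(score)), key=lambda i: score[i])
    n = len(order)
    strata = [order[j * n // k:(j + 1) * n // k] for j in range(k)]
    diffs, weights = [], []
    for s in strata:
        yt = [y[i] for i in s if treat[i] == 1]
        yc = [y[i] for i in s if treat[i] == 0]
        if yt and yc:                       # skip strata lacking a group
            diffs.append(sum(yt) / len(yt) - sum(yc) / len(yc))
            weights.append(len(s))
    return sum(d * w for d, w in zip(diffs, weights)) / sum(weights)

# Simulated data: treatment probability rises with the score, the score
# confounds the outcome, and the true treatment effect is 2.0.
n = 500
score = [random.random() for _ in range(n)]
treat = [1 if random.random() < s else 0 for s in score]
y = [2.0 * t + 3.0 * s + random.gauss(0, 0.5) for t, s in zip(treat, score)]
est = subclass_effect(score, treat, y)

boots = []
for _ in range(200):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(subclass_effect([score[i] for i in idx],
                                 [treat[i] for i in idx],
                                 [y[i] for i in idx]))
boots.sort()
ci = (boots[5], boots[194])                 # ~95% percentile interval
```

In practice the propensity score would itself be re-estimated inside each bootstrap replicate, a source of variability this sketch ignores.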


Estimating The Accuracy Of Polymerase Chain Reaction-Based Tests Using Endpoint Dilution, Jim Hughes, Patricia Totten Mar 2003

UW Biostatistics Working Paper Series

PCR-based tests for various microorganisms or target DNA sequences are generally acknowledged to be highly "sensitive" yet the concept of sensitivity is ill-defined in the literature on these tests. We propose that sensitivity should be expressed as a function of the number of target DNA molecules in the sample (or specificity when the target number is 0). However, estimating this "sensitivity curve" is problematic since it is difficult to construct samples with a fixed number of targets. Nonetheless, using serially diluted replicate aliquots of a known concentration of the target DNA sequence, we show that it is possible to disentangle …
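One simple way to formalize the idea, under a hit-model assumption that is ours rather than necessarily the authors' exact formulation: if each target molecule is independently detected with per-molecule probability p, an aliquot containing Poisson(lam) targets tests positive with probability 1 - exp(-lam * p), and the sensitivity at exactly n targets is 1 - (1 - p)^n. A dilution series then identifies p by maximum likelihood:

```python
import math
import random

random.seed(3)

true_p = 0.3                       # assumed per-molecule detection probability
dilutions = [8.0, 4.0, 2.0, 1.0, 0.5, 0.25]   # expected targets per aliquot
reps = 40                          # replicate aliquots per dilution

# Simulate the number of positive replicates at each dilution.
pos = []
for lam in dilutions:
    prob = 1 - math.exp(-lam * true_p)
    pos.append(sum(random.random() < prob for _ in range(reps)))

def loglik(p):
    """Binomial log-likelihood of the dilution-series results given p."""
    ll = 0.0
    for lam, k in zip(dilutions, pos):
        q = 1 - math.exp(-lam * p)
        ll += k * math.log(q) + (reps - k) * math.log(1 - q)
    return ll

p_hat = max((i / 1000 for i in range(1, 1000)), key=loglik)  # grid MLE
sens_at_5 = 1 - (1 - p_hat) ** 5   # estimated sensitivity at 5 targets
```

The Poisson layer is what lets the fixed-target sensitivity curve be disentangled even though no aliquot contains a known number of targets.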


Analysis Of Longitudinal Marginal Structural Models, Jennifer F. Bryan, Zhuo Yu, Mark J. Van Der Laan Nov 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator is used as an initial estimator and the corresponding treatment-orthogonalized, one-step estimator is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. A simulation study demonstrates that the treatment-orthogonalized, one-step estimator is superior to the IPTW estimator in terms …
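A minimal point-treatment sketch of the IPTW building block (the paper's longitudinal, survival version multiplies such weights over time and handles censoring, none of which is shown here): treatment probability depends on a confounder L, and weighting each subject by 1 / P(A = a | L) removes the confounding from the weighted means.

```python
import random

random.seed(4)

# Simulated point-treatment data with a single confounder L; the true
# causal effect of A on Y is 1.0 (all coefficients are assumptions).
n = 2000
L = [random.random() for _ in range(n)]
A = [1 if random.random() < 0.2 + 0.6 * l else 0 for l in L]
Y = [1.0 * a + 2.0 * l + random.gauss(0, 0.3) for a, l in zip(A, L)]

def iptw_mean(a_level):
    """Weighted mean of Y among subjects with A = a_level, using the
    true treatment mechanism (assumed known for this sketch)."""
    num = den = 0.0
    for l, a, y in zip(L, A, Y):
        if a == a_level:
            pa = 0.2 + 0.6 * l if a_level == 1 else 0.8 - 0.6 * l
            w = 1.0 / pa
            num += w * y
            den += w
    return num / den

effect = iptw_mean(1) - iptw_mean(0)   # targets the causal effect 1.0
```

The unweighted difference in means would be biased upward here, since high-L subjects are both more likely to be treated and have higher outcomes.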


An Empirical Study Of Marginal Structural Models For Time-Independent Treatment, Tanya A. Henneman, Mark J. Van Der Laan Oct 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

In non-randomized treatment studies a significant problem for statisticians is determining how best to adjust for confounders. Marginal structural models (MSMs) and inverse probability of treatment weighted (IPTW) estimators are useful in analyzing the causal effect of treatment in observational studies. Given an IPTW estimator, a doubly robust augmented IPTW (AIPTW) estimator orthogonalizes it, resulting in a more efficient estimator than the IPTW estimator. One purpose of this paper is to make a practical comparison between the IPTW estimator and the doubly robust AIPTW estimator via a series of Monte Carlo simulations. We also consider the selection of the optimal …


The Analysis Of Placement Values For Evaluating Discriminatory Measures, Margaret S. Pepe, Tianxi Cai Sep 2002

UW Biostatistics Working Paper Series

The idea of using measurements such as biomarkers, clinical data, or molecular biology assays for classification and prediction is popular in modern medicine. The scientific evaluation of such measures includes assessing the accuracy with which they predict the outcome of interest. Receiver operating characteristic curves are commonly used for evaluating the accuracy of diagnostic tests. They can be applied more broadly, indeed to any problem involving classification into two states or populations (D = 0 or D = 1). We show that the ROC curve can be interpreted as a cumulative distribution function for the discriminatory measure Y in the …
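The placement-value construction is easy to demonstrate empirically on simulated data (the distributions below are assumptions): the placement value of a case measurement is the proportion of controls exceeding it, the ROC curve is the empirical CDF of these placement values, and the AUC is one minus their mean.

```python
import random

random.seed(5)

# Hypothetical continuous measure Y: cases shifted up relative to controls.
controls = [random.gauss(0.0, 1.0) for _ in range(300)]
cases    = [random.gauss(1.5, 1.0) for _ in range(300)]

# Placement value of a case: its position in the control distribution.
placements = [sum(c > y for c in controls) / len(controls) for y in cases]

def roc(t):
    """ROC at false-positive rate t = empirical CDF of placement values."""
    return sum(p <= t for p in placements) / len(placements)

auc = 1 - sum(placements) / len(placements)   # P(case > control), no ties
```

Treating placement values as the unit of analysis is what lets standard distributional methods be brought to bear on ROC estimation.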


Case-Control Current Status Data, Nicholas P. Jewell, Mark J. Van Der Laan Sep 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

Current status observation on survival times has recently been widely studied. An extreme form of interval censoring, this data structure refers to situations where the only available information on a survival random variable, T, is whether or not T exceeds a random independent monitoring time C; this indicator is recorded as a binary random variable, Y. To date, nonparametric analyses of current status data have assumed the availability of i.i.d. random samples of the random variable (Y, C), or a similar random sample at each of a set of fixed monitoring times. In many situations, it is useful to consider a case-control sampling scheme. Here, …


Current Status Data: Review, Recent Developments And Open Problems, Nicholas P. Jewell, Mark J. Van Der Laan Sep 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

Researchers working with survival data are by now adept at handling issues associated with incomplete data, particularly those associated with various forms of censoring. An extreme form of interval censoring, known as current status observation, refers to situations where the only available information on a survival random variable T is whether or not T exceeds a random independent monitoring time C. This article contains a brief review of the extensive literature on the analysis of current status data, discussing the implications of response-based sampling on these methods. The majority of the paper introduces some recent extensions of these ideas to …


Estimating Causal Parameters In Marginal Structural Models With Unmeasured Confounders Using Instrumental Variables, Tanya A. Henneman, Mark Johannes Van Der Laan, Alan E. Hubbard Jan 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

For statisticians analyzing medical data, a significant problem in determining the causal effect of a treatment on a particular outcome of interest is how to control for unmeasured confounders. Techniques using instrumental variables (IV) have been developed to estimate causal parameters in the presence of unmeasured confounders. In this paper we apply IV methods to both linear and non-linear marginal structural models. We study a specific class of generalized estimating equations that is appropriate to these data, and compare the performance of the resulting estimator to the standard IV method, a two-stage least squares procedure. Our results are applied to …
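The two-stage least squares comparison point mentioned in the abstract can be sketched on simulated data (not the paper's GEE-based estimator; all coefficients are assumptions): regress treatment on the instrument, then regress the outcome on the fitted treatment values.

```python
import random

random.seed(6)

# Simulated data: U is an unmeasured confounder of treatment A and
# outcome Y; Z is a randomized instrument; the true effect of A is 1.0.
n = 5000
U = [random.gauss(0, 1) for _ in range(n)]
Z = [random.choice([0.0, 1.0]) for _ in range(n)]
A = [0.5 * z + 0.8 * u + random.gauss(0, 1) for z, u in zip(Z, U)]
Y = [1.0 * a + 1.5 * u + random.gauss(0, 1) for a, u in zip(A, U)]

def ols_slope(x, y):
    """Simple least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Stage 1: fitted treatment from the instrument; Stage 2: Y on the fit.
g = ols_slope(Z, A)
a0 = sum(A) / len(A) - g * (sum(Z) / len(Z))
A_hat = [a0 + g * z for z in Z]
beta_2sls = ols_slope(A_hat, Y)   # consistent for the causal effect
beta_ols  = ols_slope(A, Y)       # biased by the unmeasured confounder
```

The naive regression `beta_ols` is biased upward because U raises both A and Y, while the instrument-based estimate is not.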


Assessing The Accuracy Of A New Diagnostic Test When A Gold Standard Does Not Exist, Todd A. Alonzo, Margaret S. Pepe Oct 1998

UW Biostatistics Working Paper Series

Often the accuracy of a new diagnostic test must be assessed when a perfect gold standard does not exist. Use of an imperfect test biases the accuracy estimates of the new test. This paper reviews existing approaches to this problem including discrepant resolution and latent class analysis. Deficiencies with these approaches are identified. A new approach is proposed that combines the results of several imperfect reference tests to define a better reference standard. We call this the composite reference standard (CRS). Using the CRS, accuracy can be assessed using multistage sampling designs. Maximum likelihood estimates of accuracy and expressions for …
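A minimal numerical sketch of the composite-reference-standard idea with an any-positive (OR) rule and made-up results from two imperfect reference tests; the paper's multistage designs and maximum likelihood machinery are not shown:

```python
def composite(reference_results):
    """OR-rule composite: positive if any reference test is positive."""
    return [1 if any(row) else 0 for row in reference_results]

# Hypothetical data: each row is (reference test 1, reference test 2);
# new_test holds the corresponding results of the test under evaluation.
refs     = [(1, 1), (1, 0), (0, 1), (0, 0), (0, 0), (1, 1), (0, 0), (0, 1)]
new_test = [1, 1, 0, 0, 1, 1, 0, 0]

crs = composite(refs)
sens = sum(n for n, c in zip(new_test, crs) if c == 1) / sum(crs)
spec = sum(1 - n for n, c in zip(new_test, crs) if c == 0) / (len(crs) - sum(crs))
```

Because the CRS is defined from the reference tests alone, it avoids the circularity of discrepant resolution, where the new test's own result helps decide the truth it is judged against.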