
UW Biostatistics Working Paper Series

Specificity

Articles 1 - 9 of 9

Full-Text Articles in Physical Sciences and Mathematics

Borrowing Information Across Populations In Estimating Positive And Negative Predictive Values, Ying Huang, Youyi Fong, John Wei, Ziding Feng Oct 2012

UW Biostatistics Working Paper Series

A marker's capacity to predict risk of a disease depends on disease prevalence in the target population and its classification accuracy, i.e. its ability to discriminate diseased subjects from non-diseased subjects. The latter is often considered an intrinsic property of the marker; it is independent of disease prevalence and hence more likely to be similar across populations than risk prediction measures. In this paper, we are interested in evaluating the population-specific performance of a risk prediction marker in terms of positive predictive value (PPV) and negative predictive value (NPV) at given thresholds, when samples are available from the target population …
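The dependence described above is just Bayes' rule: predictive values combine the marker's sensitivity and specificity with the disease prevalence of the target population, which is why classification accuracy travels across populations more readily than PPV and NPV. A minimal sketch of that relationship (not the borrowing-information estimator proposed in the paper; function name and example numbers are illustrative):

```python
def ppv_npv(sensitivity: float, specificity: float, prevalence: float) -> tuple[float, float]:
    """Positive and negative predictive values from classification accuracy
    and disease prevalence, via Bayes' rule."""
    # PPV = P(D = 1 | marker positive)
    ppv = sensitivity * prevalence / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    )
    # NPV = P(D = 0 | marker negative)
    npv = specificity * (1 - prevalence) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence
    )
    return ppv, npv

# The same classification accuracy yields very different predictive values as prevalence changes:
print(ppv_npv(0.80, 0.90, prevalence=0.05))  # low-prevalence target population
print(ppv_npv(0.80, 0.90, prevalence=0.30))  # high-prevalence population
```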


Evaluating The Roc Performance Of Markers For Future Events, Margaret Pepe, Yingye Zheng, Yuying Jin May 2007

UW Biostatistics Working Paper Series

Receiver operating characteristic (ROC) curves play a central role in the evaluation of biomarkers and tests for disease diagnosis. Predictors for event time outcomes can also be evaluated with ROC curves, but the time lag between marker measurement and event time must be acknowledged. We discuss different definitions of time-dependent ROC curves in the context of real applications. Several approaches have been proposed for estimation. We contrast retrospective versus prospective methods with regard to assumptions and flexibility, including their capacities to incorporate censored data, competing risks, and different sampling schemes. Applications to two datasets are presented.
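As a concrete anchor for the "time lag" point, here is a naive cumulative/dynamic time-dependent ROC computation for fully observed (uncensored) event times: subjects with an event by horizon t are cases, and subjects still event-free at t are controls. The estimation approaches contrasted in the paper exist precisely because censoring, competing risks, and sampling design make this naive version inadequate; the function name and the uncensored-data assumption are illustrative, not from the paper.

```python
import numpy as np

def cumulative_dynamic_roc(marker, event_time, t, thresholds=None):
    """Naive cumulative/dynamic time-dependent ROC at horizon t,
    assuming fully observed (uncensored) event times."""
    marker = np.asarray(marker, dtype=float)
    event_time = np.asarray(event_time, dtype=float)
    if thresholds is None:
        thresholds = np.unique(marker)             # sweep every observed marker value
    cases = marker[event_time <= t]                # event by horizon t
    controls = marker[event_time > t]              # event-free at t
    tpr = np.array([(cases > c).mean() for c in thresholds])     # sensitivity(c, t)
    fpr = np.array([(controls > c).mean() for c in thresholds])  # 1 - specificity(c, t)
    return fpr, tpr
```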


New Confidence Intervals For The Difference Between Two Sensitivities At A Fixed Level Of Specificity, Gengsheng Qin, Yu-Sheng Hsu, Xiao-Hua Zhou Mar 2005

UW Biostatistics Working Paper Series

For two continuous-scale diagnostic tests, it is of interest to compare their sensitivities at a predetermined level of specificity. In this paper we propose three new intervals for the difference between two sensitivities at a fixed level of specificity. These intervals are easy to compute. We also conduct simulation studies to compare the relative performance of the new intervals with the existing normal-approximation-based interval proposed by Wieand et al. (1989). Our simulation results show that the newly proposed intervals perform better than the existing normal-approximation-based interval in terms of coverage accuracy and interval length.
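For orientation, a sketch of the quantity being interval-estimated: sensitivity at fixed specificity is the proportion of diseased subjects whose marker exceeds the control-distribution quantile corresponding to that specificity, and the simplest Wald interval for the difference ignores the variability from estimating that threshold. This is only a naive baseline, not the Wieand et al. interval or the new intervals proposed in the paper; function names are illustrative.

```python
import numpy as np

def sens_at_spec(cases, controls, spec=0.90):
    """Empirical sensitivity at the threshold that gives the requested specificity
    (the spec-quantile of the non-diseased marker values)."""
    threshold = np.quantile(controls, spec)
    return np.mean(np.asarray(cases) > threshold)

def naive_wald_diff_ci(cases1, controls1, cases2, controls2, spec=0.90, z=1.96):
    """Naive Wald interval for the difference in sensitivities at fixed specificity,
    treating the two estimates as independent binomial proportions. It ignores the
    extra variability from estimating the thresholds, which is exactly what the
    intervals studied in the paper are designed to handle."""
    s1 = sens_at_spec(cases1, controls1, spec)
    s2 = sens_at_spec(cases2, controls2, spec)
    se = np.sqrt(s1 * (1 - s1) / len(cases1) + s2 * (1 - s2) / len(cases2))
    diff = s1 - s2
    return diff - z * se, diff + z * se
```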


Standardizing Markers To Evaluate And Compare Their Performances, Margaret S. Pepe, Gary M. Longton Jan 2005

UW Biostatistics Working Paper Series

Introduction: Markers that purport to distinguish subjects with a condition from those without a condition must be evaluated rigorously for their classification accuracy. A single approach to statistically evaluating and comparing markers is not yet established.

Methods: We suggest a standardization that uses the marker distribution in unaffected subjects as a reference. For an affected subject with marker value Y, the standardized placement value is the proportion of unaffected subjects with marker values that exceed Y.

Results: We apply the standardization to two illustrative datasets. In patients with pancreatic cancer, placement values calculated for the CA 19-9 marker are smaller …
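The Methods paragraph above amounts to a one-line empirical estimator: the placement value of an affected subject's marker value Y is the survivor function of the unaffected (reference) marker distribution evaluated at Y. A minimal sketch (illustrative function name, not the authors' software):

```python
import numpy as np

def placement_values(affected, unaffected):
    """Standardized placement value for each affected subject: the proportion of
    unaffected (reference) subjects whose marker value exceeds that subject's Y."""
    affected = np.asarray(affected, dtype=float)
    unaffected = np.asarray(unaffected, dtype=float)
    return np.array([(unaffected > y).mean() for y in affected])

# Small placement values indicate affected subjects who stand out clearly
# against the reference distribution, i.e. a well-performing marker.
```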


Survival Model Predictive Accuracy And Roc Curves, Patrick Heagerty, Yingye Zheng Dec 2003

UW Biostatistics Working Paper Series

The predictive accuracy of a survival model can be summarized using extensions of the proportion of variation explained by the model, or R^2, commonly used for continuous response models, or using extensions of sensitivity and specificity, which are commonly used for binary response models.

In this manuscript we propose new time-dependent accuracy summaries based on time-specific versions of sensitivity and specificity calculated over risk sets. We connect the accuracy summaries to a previously proposed global concordance measure which is a variant of Kendall's tau. In addition, we show how standard Cox regression output can be used to obtain estimates of …
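A rough illustration of "time-specific sensitivity and specificity calculated over risk sets" for fully observed event times: at each event time t, subjects still at risk are split into incident cases (event at t) and dynamic controls (event after t). This sketch ignores censoring, which the Cox-based estimators in the manuscript are designed to handle; the function name and assumptions are illustrative.

```python
import numpy as np

def risk_set_accuracy(marker, event_time, t, threshold):
    """Time-specific sensitivity/specificity over the risk set at event time t,
    assuming exactly observed (uncensored) event times: incident cases have an
    event at t, dynamic controls survive beyond t."""
    marker = np.asarray(marker, dtype=float)
    event_time = np.asarray(event_time, dtype=float)
    at_risk = event_time >= t
    cases = marker[at_risk & (event_time == t)]    # exact-time comparison: sketch only
    controls = marker[at_risk & (event_time > t)]
    sensitivity = (cases > threshold).mean() if cases.size else np.nan
    specificity = (controls <= threshold).mean() if controls.size else np.nan
    return sensitivity, specificity
```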


Improved Confidence Intervals For The Sensitivity At A Fixed Level Of Specificity Of A Continuous-Scale Diagnostic Test, Xiao-Hua Zhou, Gengsheng Qin May 2003

UW Biostatistics Working Paper Series

For a continuous-scale test, it is of interest to construct a confidence interval for the sensitivity of the diagnostic test at the cut-off that yields a predetermined level of specificity (e.g., 80%, 90%, or 95%). In this paper we propose two new intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity. We then conduct simulation studies to compare the relative performance of these two intervals with the best existing BCa bootstrap interval, proposed by Platt et al. (2000). Our simulation results show that the newly proposed intervals are better than the BCa bootstrap …
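For context, the empirical estimator and a simple percentile bootstrap interval are sketched below. The BCa interval of Platt et al. (2000) used as the comparator adds bias and acceleration corrections to this percentile scheme, and the paper's proposed intervals are different again; function names and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2023)

def sens_at_fixed_spec(cases, controls, spec=0.90):
    """Empirical sensitivity at the cut-off giving the requested specificity."""
    cutoff = np.quantile(controls, spec)
    return np.mean(np.asarray(cases) > cutoff)

def percentile_bootstrap_ci(cases, controls, spec=0.90, B=2000, alpha=0.05):
    """Percentile bootstrap interval for sensitivity at fixed specificity:
    a simpler stand-in for the BCa interval, which additionally applies
    bias and acceleration corrections."""
    cases = np.asarray(cases, dtype=float)
    controls = np.asarray(controls, dtype=float)
    stats = [
        sens_at_fixed_spec(rng.choice(cases, cases.size),      # resample with replacement
                           rng.choice(controls, controls.size), spec)
        for _ in range(B)
    ]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```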


Estimating The Accuracy Of Polymerase Chain Reaction-Based Tests Using Endpoint Dilution, Jim Hughes, Patricia Totten Mar 2003

UW Biostatistics Working Paper Series

PCR-based tests for various microorganisms or target DNA sequences are generally acknowledged to be highly "sensitive," yet the concept of sensitivity is ill-defined in the literature on these tests. We propose that sensitivity should be expressed as a function of the number of target DNA molecules in the sample (or specificity when the target number is 0). However, estimating this "sensitivity curve" is problematic since it is difficult to construct samples with a fixed number of targets. Nonetheless, using serially diluted replicate aliquots of a known concentration of the target DNA sequence, we show that it is possible to disentangle …
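One common way to make "sensitivity as a function of the number of target molecules" concrete is a single-hit model, assumed here purely for illustration: each of n target molecules is detected independently with probability p, and serial dilution makes the number of molecules per aliquot approximately Poisson. The paper's actual model and estimation strategy may differ.

```python
import numpy as np

def sensitivity_curve(n_targets, p_detect):
    """Single-hit model (illustrative assumption): each target molecule is detected
    independently with probability p_detect, so P(positive | n targets) = 1 - (1 - p)^n."""
    n = np.asarray(n_targets)
    return 1.0 - (1.0 - p_detect) ** n

def expected_positive_rate(mean_targets, p_detect, false_pos=0.0):
    """With serial dilution the target count per aliquot is roughly Poisson(mu);
    averaging the single-hit model over Poisson counts, and allowing an independent
    false-positive probability at zero targets, gives
    P(positive) = 1 - (1 - false_pos) * exp(-mu * p_detect)."""
    return 1.0 - (1.0 - false_pos) * np.exp(-np.asarray(mean_targets) * p_detect)
```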


The Analysis Of Placement Values For Evaluating Discriminatory Measures, Margaret S. Pepe, Tianxi Cai Sep 2002

UW Biostatistics Working Paper Series

The idea of using measurements such as biomarkers, clinical data, or molecular biology assays for classification and prediction is popular in modern medicine. The scientific evaluation of such measures includes assessing the accuracy with which they predict the outcome of interest. Receiver operating characteristic (ROC) curves are commonly used for evaluating the accuracy of diagnostic tests. They can be applied more broadly, indeed to any problem involving classification into two states or populations (D = 0 or D = 1). We show that the ROC curve can be interpreted as a cumulative distribution function for the discriminatory measure Y in the …
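The distribution-function interpretation can be checked directly: with a case's placement value defined as the proportion of the reference (D = 0) sample exceeding its value Y, the empirical ROC curve at false positive rate t is simply the empirical CDF of the case placement values evaluated at t. A small sketch using the empirical reference distribution (illustrative names):

```python
import numpy as np

def roc_from_placement_values(cases, controls, fpr_grid=None):
    """Empirical ROC curve read off as the CDF of case placement values:
    ROC(t) = P(placement value of a case <= t)."""
    cases = np.asarray(cases, dtype=float)
    controls = np.asarray(controls, dtype=float)
    pv = np.array([(controls > y).mean() for y in cases])   # placement values
    if fpr_grid is None:
        fpr_grid = np.linspace(0.0, 1.0, 101)               # false positive rates t
    roc = np.array([(pv <= t).mean() for t in fpr_grid])    # empirical CDF of pv
    return fpr_grid, roc
```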


Assessing The Accuracy Of A New Diagnostic Test When A Gold Standard Does Not Exist, Todd A. Alonzo, Margaret S. Pepe Oct 1998

UW Biostatistics Working Paper Series

Often the accuracy of a new diagnostic test must be assessed when a perfect gold standard does not exist. Use of an imperfect test biases the accuracy estimates of the new test. This paper reviews existing approaches to this problem, including discrepant resolution and latent class analysis. Deficiencies of these approaches are identified. A new approach is proposed that combines the results of several imperfect reference tests to define a better reference standard. We call this the composite reference standard (CRS). Using the CRS, accuracy can be assessed using multistage sampling designs. Maximum likelihood estimates of accuracy and expressions for …
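As an illustration of the idea (not the paper's formal development), a composite reference standard can be formed by a fixed, pre-specified rule over several imperfect reference tests; the "any-positive" rule below is one common choice and is assumed here only for the sketch. The new test's apparent accuracy is then tabulated against that composite.

```python
import numpy as np

def composite_reference(*reference_tests):
    """'Any-positive' composite reference standard: positive if any of the
    imperfect reference tests is positive (illustrative combination rule)."""
    tests = np.vstack([np.asarray(t, dtype=int) for t in reference_tests])
    return (tests.sum(axis=0) > 0).astype(int)

def accuracy_vs_reference(new_test, reference):
    """Apparent sensitivity and specificity of the new test judged against the composite."""
    new_test = np.asarray(new_test, dtype=int)
    reference = np.asarray(reference, dtype=int)
    sensitivity = new_test[reference == 1].mean()
    specificity = 1.0 - new_test[reference == 0].mean()
    return sensitivity, specificity

# Hypothetical example: two imperfect reference tests and one new test
ref1 = np.array([1, 0, 1, 0, 0, 1])
ref2 = np.array([0, 0, 1, 0, 1, 1])
new  = np.array([1, 0, 1, 0, 1, 1])
crs = composite_reference(ref1, ref2)
print(accuracy_vs_reference(new, crs))
```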