Open Access. Powered by Scholars. Published by Universities.®
Keywords:
- BCa bootstrap (1)
- Causal inference (1)
- Causal inference; EM algorithm; General location model; Missing data; Non-compliance (1)
- Cerebrovascular disease (1)
- Constrained MCMC (1)
- Diffusion (1)
- EM-algorithm (1)
- Endpoint dilution (1)
- Finite volume scheme (1)
- Isotonic estimation (1)
- Local odds ratios (1)
- Longitudinal contingency tables (1)
- Monotone smoothing spline (1)
- Observational study (1)
- Outcome measures (1)
- PCR (1)
- Parabolic system (1)
- Parallel computation (1)
- Phototransduction (1)
- Propensity score (1)
- Sensitivity (1)
- Signaling (1)
- Specificity (1)
- Stratification (1)
- Stroke (1)
- Totally positive dependence (1)
Articles 1 - 5 of 5
Full-Text Articles in Medicine and Health Sciences
Cross-Calibration Of Stroke Disability Measures: Bayesian Analysis Of Longitudinal Ordinal Categorical Data Using Negative Dependence, Giovanni Parmigiani, Heidi W. Ashih, Gregory P. Samsa, Pamela W. Duncan, Sue Min Lai, David B. Matchar
Johns Hopkins University, Dept. of Biostatistics Working Papers
It is common to assess disability of stroke patients using standardized scales, such as the Rankin Stroke Outcome Scale (RS) and the Barthel Index (BI). The Rankin Scale, which was designed for application to stroke, is based on directly assessing the global condition of a patient. The Barthel Index, which was designed for general applications, is based on a series of questions about the patient’s ability to carry out 10 basic activities of daily living. As both scales are commonly used, but few studies use both, translating between scales is important in gaining an overall understanding of the efficacy of …
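The object such a cross-calibration ultimately estimates resembles a conditional translation table: given a patient's grade on one scale, a distribution over grades on the other. Purely as an illustration (the paper's Bayesian longitudinal model with negative-dependence constraints is far richer, and every number below is hypothetical), an empirical cross-walk can be formed from a cross-tabulation:

```python
import numpy as np

# Hypothetical joint counts of patients cross-classified by Rankin Scale
# grade (rows, 0-5) and a coarsened Barthel Index (columns, 4 bands).
# The paper models these cells jointly over time; here we simply form
# the empirical cross-walk from a single table of made-up counts.
counts = np.array([
    [30,  5,  1,  0],
    [12, 20,  6,  1],
    [ 3, 15, 18,  4],
    [ 1,  6, 22, 10],
    [ 0,  2, 14, 25],
    [ 0,  0,  4, 30],
], dtype=float)

# P(BI band | RS grade): normalize each row to sum to one.
crosswalk = counts / counts.sum(axis=1, keepdims=True)

for rs_grade, row in enumerate(crosswalk):
    print(f"RS={rs_grade}: " + " ".join(f"{p:.2f}" for p in row))
```

Each row then answers the translation question directly: a patient at a given RS grade is assigned a probability over BI bands rather than a single imputed score.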
An Extended General Location Model For Causal Inference From Data Subject To Noncompliance And Missing Values, Yahong Peng, Rod Little, Trivellore E. Raghunathan
The University of Michigan Department of Biostatistics Working Paper Series
Noncompliance is a common problem in experiments involving randomized assignment of treatments, and standard analyses based on intention-to-treat or treatment received have limitations. An attractive alternative is to estimate the Complier-Average Causal Effect (CACE), which is the average treatment effect for the subpopulation of subjects who would comply under either treatment (Angrist, Imbens and Rubin, 1996, henceforth AIR). We propose an Extended General Location Model to estimate the CACE from data with noncompliance and missing data in the outcome and in baseline covariates. Models for both continuous and categorical outcomes and ignorable and latent ignorable (Frangakis and Rubin, 1999) …
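Under the AIR assumptions (randomization, exclusion restriction, no defiers), a simple moment-based estimate of the CACE is the instrumental-variable ratio: the ITT effect on the outcome divided by the ITT effect on treatment received. A minimal sketch on simulated data, without the paper's handling of missing values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulate compliance types: 60% compliers, the rest never-takers
# (no defiers, as the AIR assumptions require).
complier = rng.random(n) < 0.6
z = rng.integers(0, 2, n)            # randomized assignment
d = np.where(complier, z, 0)         # treatment actually received
y = 1.0 * d + rng.normal(size=n)     # true complier effect = 1.0

# Wald/IV estimate of the CACE:
itt_y = y[z == 1].mean() - y[z == 0].mean()   # ITT effect on outcome
itt_d = d[z == 1].mean() - d[z == 0].mean()   # ITT effect on receipt
cace = itt_y / itt_d
print(f"estimated CACE = {cace:.2f}")         # close to the true 1.0
```

The ratio form makes clear why noncompliance dilutes the ITT effect: the numerator shrinks with the complier fraction while the denominator recovers it.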
Computational Models For Diffusion Of Second Messengers In Visual Transduction, Harihar Khanal
Publications
The process of phototransduction, whereby light is converted into an electrical response in retinal rod and cone photoreceptors, involves, as a crucial step, the diffusion of cytoplasmic signaling molecules, termed second messengers. A barrier to mathematical and computational modeling is the complex geometry of the rod outer segment, which contains about 1000 thin discs. Most current investigations on the subject assume a well-stirred bulk aqueous environment, thereby avoiding such geometrical complexity. We present theoretical and computational spatio-temporal models for phototransduction in vertebrate rod photoreceptors, which are pointwise in nature and thus take into account the complex geometry of the …
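The computational kernel underlying such models is the numerical solution of a diffusion equation for the second-messenger concentration. As a toy illustration only (the paper treats the full disc-stack geometry; the grid size, diffusivity, and initial condition here are made up), an explicit finite-volume update for 1-D diffusion with no-flux boundaries looks like:

```python
import numpy as np

D = 1.0                    # diffusivity (arbitrary units)
length = 1.0               # domain length
nx = 100
dx = length / nx
dt = 0.4 * dx * dx / D     # explicit stability requires dt <= dx^2 / (2 D)

# Initial condition: messenger released in the middle cells.
u = np.zeros(nx)
u[45:55] = 1.0
mass0 = u.sum() * dx

for _ in range(2000):
    # Fickian fluxes at interior cell faces; zero flux (reflecting)
    # at both ends of the domain.
    flux = np.zeros(nx + 1)
    flux[1:-1] = -D * (u[1:] - u[:-1]) / dx
    u = u - dt / dx * (flux[1:] - flux[:-1])

# Diffusion conserves total mass while flattening the profile.
print(u.min(), u.max(), u.sum() * dx)
```

Because the update is written in flux form, mass conservation holds to round-off, which is the property that makes finite-volume schemes attractive on the complicated geometries the paper targets.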
A Bootstrap Confidence Interval Procedure For The Treatment Effect Using Propensity Score Subclassification, Wanzhu Tu, Xiao-Hua Zhou
UW Biostatistics Working Paper Series
In the analysis of observational studies, propensity score subclassification has been shown to be a powerful method for adjusting unbalanced covariates for the purpose of causal inference. One practical difficulty in carrying out such an analysis is obtaining a correct variance estimate for such inferences while reducing bias in the estimate of the treatment effect due to an imbalance in the measured covariates. In this paper, we propose a bootstrap procedure for inference concerning the average treatment effect; our bootstrap method is based on an extension of Efron’s bias-corrected accelerated (BCa) bootstrap confidence interval to a two-sample problem. …
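For reference, the BCa construction adjusts the ordinary percentile interval with a bias-correction term z0, taken from the fraction of bootstrap replicates below the point estimate, and an acceleration term a, taken from a jackknife. Below is a one-sample sketch for a mean under those standard definitions; it does not reproduce the paper's two-sample, subclassified extension:

```python
import random
from statistics import NormalDist

random.seed(1)
norm = NormalDist()

def bca_interval(data, stat, b=2000, alpha=0.05):
    """Efron's bias-corrected accelerated (BCa) bootstrap CI for stat(data)."""
    n = len(data)
    theta = stat(data)
    boots = sorted(stat(random.choices(data, k=n)) for _ in range(b))

    # Bias correction: where theta sits in the bootstrap distribution.
    prop = sum(t < theta for t in boots) / b
    z0 = norm.inv_cdf(min(max(prop, 1 / b), 1 - 1 / b))

    # Acceleration from the jackknife (skewness of leave-one-out estimates).
    jack = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    jbar = sum(jack) / n
    num = sum((jbar - j) ** 3 for j in jack)
    den = 6 * sum((jbar - j) ** 2 for j in jack) ** 1.5
    a = num / den if den else 0.0

    def adj(z_alpha):
        # Map the nominal quantile through the BCa adjustment.
        z = z0 + (z0 + z_alpha) / (1 - a * (z0 + z_alpha))
        k = min(max(int(norm.cdf(z) * b), 0), b - 1)
        return boots[k]

    return adj(norm.inv_cdf(alpha / 2)), adj(norm.inv_cdf(1 - alpha / 2))

data = [random.gauss(0, 1) for _ in range(50)]
lo, hi = bca_interval(data, lambda d: sum(d) / len(d))
print(f"95% BCa CI for the mean: ({lo:.2f}, {hi:.2f})")
```

Extending this to the propensity-score setting is the substance of the paper: the statistic becomes a subclass-weighted difference in means, and the resampling must respect the two-sample structure.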
Estimating The Accuracy Of Polymerase Chain Reaction-Based Tests Using Endpoint Dilution, Jim Hughes, Patricia Totten
UW Biostatistics Working Paper Series
PCR-based tests for various microorganisms or target DNA sequences are generally acknowledged to be highly "sensitive" yet the concept of sensitivity is ill-defined in the literature on these tests. We propose that sensitivity should be expressed as a function of the number of target DNA molecules in the sample (or specificity when the target number is 0). However, estimating this "sensitivity curve" is problematic since it is difficult to construct samples with a fixed number of targets. Nonetheless, using serially diluted replicate aliquots of a known concentration of the target DNA sequence, we show that it is possible to disentangle …
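One way to make the "sensitivity curve" concrete: if each target molecule is assumed to be detected independently with probability p, a sample with exactly n targets tests positive with probability 1 - (1 - p)^n, and with Poisson numbers of targets per diluted aliquot the positivity rate at expected copy number lambda is 1 - exp(-lambda * p). A grid-search maximum-likelihood fit under those assumptions (the dilution data are fabricated, and this may not match the paper's actual model):

```python
import math

# Hypothetical dilution series: expected target copies per aliquot,
# number of replicate aliquots tested, and number testing positive.
lam =  [100.0, 10.0, 1.0, 0.1]
reps = [10,    10,   10,  10]
pos =  [10,    9,    3,   0]

def log_lik(p):
    """Binomial log-likelihood with P(positive | lam) = 1 - exp(-lam * p)."""
    ll = 0.0
    for l, n, k in zip(lam, reps, pos):
        q = 1.0 - math.exp(-l * p)          # P(aliquot tests positive)
        q = min(max(q, 1e-12), 1 - 1e-12)   # guard against log(0)
        ll += k * math.log(q) + (n - k) * math.log(1 - q)
    return ll

# Grid-search MLE for the per-molecule detection probability p.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=log_lik)
print(f"estimated per-molecule sensitivity p = {p_hat:.3f}")

# Implied sensitivity curve: P(test positive | exactly n targets).
for n in (1, 2, 5, 10):
    print(n, round(1 - (1 - p_hat) ** n, 3))
```

This captures the abstract's point that sensitivity is not a single number: the fitted curve rises with the number of target molecules in the sample, and specificity is its value at zero targets.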