
COBRA
Epidemiology, 2005

Full-Text Articles in Medicine and Health Sciences
Articles 1 - 18 of 18

A Hybrid Model For Reducing Ecological Bias, Ruth Salway, Jon Wakefield Dec 2005

UW Biostatistics Working Paper Series

A major drawback of epidemiological ecological studies, in which associations between area-level summaries of risk and exposure are used to make inferences about individual risk, is the difficulty of characterising within-area variability in exposure and confounder variables. To avoid ecological bias, samples of individual exposure/confounder data within each area are required. Unfortunately these may be difficult or expensive to obtain, particularly if large samples are required. In this paper we propose a new approach suitable for use with small samples. We combine a Bayesian non-parametric Dirichlet process prior with an estimating functions approach, and show that this model gives …


Health-Exposure Modelling And The Ecological Fallacy, Jon Wakefield, Gavin Shaddick Dec 2005

UW Biostatistics Working Paper Series

Recently there has been increased interest in modelling the association between aggregate disease counts and environmental exposures measured, for example via air pollution monitors, at point locations. This paper has two aims: first, we develop a model for such data in order to avoid ecological bias; second, we illustrate that modelling the exposure surface and estimating exposures may lead to bias in estimation of health effects. Design issues are also briefly considered, in particular the loss of information in moving from individual to ecological data, and the at-risk populations to consider in relation to the pollution monitor locations. The approach …


History-Adjusted Marginal Structural Models To Estimate Time-Varying Effect Modification, Maya L. Petersen, Steven G. Deeks, Jeffrey N. Martin, Mark J. Van Der Laan Dec 2005

U.C. Berkeley Division of Biostatistics Working Paper Series

Much of epidemiology and clinical medicine is focused on estimating the effects of treatments or interventions administered over time. In such settings of longitudinal treatment, time-dependent confounding is often an important source of bias. Marginal structural models are a powerful tool for estimating the causal effect of a treatment using observational data, particularly when time-dependent confounding is present. Recent statistical work presented a generalization of marginal structural models, called history-adjusted marginal structural models. Unlike standard marginal structural models, history-adjusted marginal structural models can be used to estimate modification of treatment effects by time-varying covariates. Estimation of time-dependent causal effect modification is …


Population Intervention Models In Causal Inference, Alan E. Hubbard, Mark J. Van Der Laan Oct 2005

U.C. Berkeley Division of Biostatistics Working Paper Series

Marginal structural models (MSM) provide a powerful tool for estimating the causal effect of a treatment variable or risk variable on the distribution of a disease in a population. These models, as originally introduced by Robins (e.g., Robins (2000a), Robins (2000b), van der Laan and Robins (2002)), model the marginal distributions of treatment-specific counterfactual outcomes, possibly conditional on a subset of the baseline covariates, and their dependence on treatment. Marginal structural models are particularly useful in the context of longitudinal data structures, in which each subject's treatment and covariate history are measured over time, and an outcome is recorded at …


Additive Hazards Models With Latent Treatment Effectiveness Lag Time, Ying Qing Chen, Charles A. Rohde, Mei-Cheng Wang Oct 2005

Johns Hopkins University, Dept. of Biostatistics Working Papers

In many clinical trials to evaluate treatment efficacy, it is believed that there may exist latent treatment-effectiveness lag times after which a medical procedure or chemical compound is in full effect. In this article, semiparametric regression models are proposed and studied to estimate the treatment effect accounting for such latent lag times. The new models take advantage of the invariance property of the additive hazards model when marginalizing over random effects, so the model parameters are easy to estimate and interpret, while the flexibility of leaving the baseline hazard function unspecified is retained. Monte Carlo simulation studies demonstrate the …


Gauss-Seidel Estimation Of Generalized Linear Mixed Models With Application To Poisson Modeling Of Spatially Varying Disease Rates, Subharup Guha, Louise Ryan Oct 2005

Harvard University Biostatistics Working Paper Series

Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Because the likelihood is not available in closed form, GLMMs are often fit by computational procedures such as penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iterative weighted least squares (IWLS). High computational costs and memory constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases.

This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM …
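
As a point of reference for the algorithm named above, the sketch below runs the classic Gauss-Seidel iteration on a small linear system; the "update one block using the latest values of the others" idea is what the paper extends to sub-models of a GLMM. This is illustrative only, not the authors' GLMM-fitting procedure; the matrix, tolerance, and function name are hypothetical.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b by sweeping through coordinates, updating each one
    using the most recent values of all the others (Gauss-Seidel)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # sum of the off-diagonal terms using current values, then solve for x[i]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Small symmetric, diagonally dominant example system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))   # agrees with np.linalg.solve(A, b)
```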


Computational Techniques For Spatial Logistic Regression With Large Datasets, Christopher J. Paciorek, Louise Ryan Oct 2005

Harvard University Biostatistics Working Paper Series

In epidemiological work, outcomes are frequently non-normal, sample sizes may be large, and effects are often small. To relate health outcomes to geographic risk factors, fast and powerful methods for fitting spatial models, particularly for non-normal data, are required. We focus on binary outcomes, with the risk surface a smooth function of space. We compare penalized likelihood models, including the penalized quasi-likelihood (PQL) approach, and Bayesian models based on fit, speed, and ease of implementation.

A Bayesian model using a spectral basis representation of the spatial surface provides the best tradeoff of sensitivity and specificity in simulations, detecting real spatial …


Estimation And Projection Of Incidence And Prevalence Based On Doubly Truncated Data With Application To Pharmacoepidemiological Databases, Henrik Stovring, Mei-Cheng Wang Oct 2005

Johns Hopkins University, Dept. of Biostatistics Working Papers

Incidences of disease are of primary interest in any epidemiological analysis of disease spread in general populations. Ordinary estimates obtained from follow-up of an initially non-diseased cohort are costly, and so such estimates are not routinely available. In contrast, routine registers exist for many diseases with data on all detected cases within a given calendar time period, but they lack information on the non-diseased. In the present work we show how this type of data, supplemented with data on the past birth process, can be analyzed to yield age-specific incidence estimates as well as lifetime prevalence. A non-parametric model is studied …


Estimation Of Direct Causal Effects, Maya L. Petersen, Mark J. Van Der Laan Sep 2005

U.C. Berkeley Division of Biostatistics Working Paper Series

Many common problems in epidemiologic and clinical research involve estimating the effect of an exposure on an outcome while blocking the exposure's effect on an intermediate variable. Effects of this kind are termed direct effects. Estimation of direct effects arises frequently in research aimed at understanding mechanistic pathways by which an exposure acts to cause or prevent disease, as well as in many other settings. Although multivariable regression is commonly used to estimate direct effects, this approach requires assumptions beyond those required for the estimation of total causal effects. In addition, multivariable regression estimates a particular type of direct effect, …
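
For orientation, here is a minimal simulated illustration of the multivariable-regression approach the abstract mentions: the coefficient on the exposure, after adjusting for the intermediate variable, is the usual regression-based "direct effect" estimate. The data-generating values and variable names are invented, and the sketch omits the extra assumptions (e.g., no unmeasured confounding of the intermediate and the outcome) that the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: exposure A affects intermediate M, and both affect outcome Y.
A = rng.binomial(1, 0.5, n)
M = 0.8 * A + rng.normal(size=n)               # intermediate variable
Y = 1.0 * A + 1.5 * M + rng.normal(size=n)     # total effect of A = 1.0 + 1.5 * 0.8 = 2.2

# Total effect: regress Y on A alone.
X_total = np.column_stack([np.ones(n), A])
total = np.linalg.lstsq(X_total, Y, rcond=None)[0][1]

# Regression-based "direct effect": regress Y on A while adjusting for M.
X_direct = np.column_stack([np.ones(n), A, M])
direct = np.linalg.lstsq(X_direct, Y, rcond=None)[0][1]

print(f"total effect: {total:.2f}, direct effect adjusting for M: {direct:.2f}")
```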


A Nonstationary Negative Binomial Time Series With Time-Dependent Covariates: Enterococcus Counts In Boston Harbor, E. Andres Houseman, Brent Coull, James P. Shine Sep 2005

Harvard University Biostatistics Working Paper Series

Boston Harbor has had a history of poor water quality, including contamination by enteric pathogens. We conduct a statistical analysis of data collected by the Massachusetts Water Resources Authority (MWRA) between 1996 and 2002 to evaluate the effects of court-mandated improvements in sewage treatment. Motivated by the ineffectiveness of standard Poisson mixture models and their zero-inflated counterparts, we propose a new negative binomial model for time series of Enterococcus counts in Boston Harbor, where nonstationarity and autocorrelation are modeled using a nonparametric smooth function of time in the predictor. Without further restrictions, this function is not identifiable in the presence …
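
As a rough sketch of the modelling ingredients (overdispersed counts with a smooth trend in time), the example below fits a negative binomial GLM in which a polynomial-plus-seasonal basis stands in for the paper's nonparametric smooth. The data are simulated, not the MWRA Enterococcus series; the basis, dispersion value, and use of a fixed-dispersion statsmodels GLM are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300                       # hypothetical regular sampling times
t = np.arange(n)

# Simulate overdispersed counts whose log-mean varies smoothly in time.
mu = np.exp(1.5 + 0.8 * np.sin(2 * np.pi * t / 52) - 0.003 * t)
alpha = 0.7                   # NB2 dispersion: Var = mu + alpha * mu**2
lam = rng.gamma(shape=1 / alpha, scale=alpha * mu)   # gamma-Poisson mixture
y = rng.poisson(lam)

# Crude stand-in for a nonparametric smooth of time: polynomial + seasonal terms.
tt = t / n
X = sm.add_constant(np.column_stack([
    tt, tt**2,
    np.sin(2 * np.pi * t / 52), np.cos(2 * np.pi * t / 52),
]))

fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=alpha)).fit()
print(fit.summary())
```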


Direct Effect Models, Mark J. Van Der Laan, Maya L. Petersen Aug 2005

U.C. Berkeley Division of Biostatistics Working Paper Series

The causal effect of a treatment on an outcome is generally mediated by several intermediate variables. Estimation of the component of the causal effect of a treatment that is mediated by a given intermediate variable (the indirect effect of the treatment), and the component that is not mediated by that intermediate variable (the direct effect of the treatment) is often relevant to mechanistic understanding and to the design of clinical and public health interventions. Under the assumption of no-unmeasured confounders for treatment and the intermediate variable, Robins & Greenland (1992) define an individual direct effect as the counterfactual effect of …


Attributable Risk Function In The Proportional Hazards Model, Ying Qing Chen, Chengcheng Hu, Yan Wang May 2005

UW Biostatistics Working Paper Series

As an epidemiological parameter, the population attributable fraction is an important measure for quantifying the public health burden of morbidity and mortality attributable to an exposure. In this article, we extend this parameter to the attributable fraction function in survival analysis of time-to-event outcomes, and further establish its estimation and inference procedures based on the widely used proportional hazards models. Numerical examples and simulation studies are presented to validate and demonstrate the proposed methods.
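
For context, the classical (static) population attributable fraction can be computed from the exposure prevalence and the relative risk via Levin's formula; the paper extends this scalar quantity to a function of time under the proportional hazards model. The sketch below shows only the classical formula, with made-up inputs.

```python
def population_attributable_fraction(p_exposed: float, relative_risk: float) -> float:
    """Levin's formula: the fraction of cases that would be avoided if the
    exposure were removed, assuming the relative risk reflects a causal effect."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Example: 30% of the population exposed, relative risk of 2.5
print(round(population_attributable_fraction(0.30, 2.5), 3))   # about 0.31
```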


Causal Inference In Longitudinal Studies With History-Restricted Marginal Structural Models, Romain Neugebauer, Mark J. Van Der Laan, Ira B. Tager Apr 2005

U.C. Berkeley Division of Biostatistics Working Paper Series

Causal Inference based on Marginal Structural Models (MSMs) is particularly attractive to subject-matter investigators because MSM parameters provide explicit representations of causal effects. We introduce History-Restricted Marginal Structural Models (HRMSMs) for longitudinal data for the purpose of defining causal parameters which may often be better suited for Public Health research. This new class of MSMs allows investigators to analyze the causal effect of a treatment on an outcome based on a fixed, shorter and user-specified history of exposure compared to MSMs. By default, the latter represents the treatment causal effect of interest based on a treatment history defined by the …


History-Adjusted Marginal Structural Models: Time-Varying Effect Modification, Maya L. Petersen, Mark J. Van Der Laan Apr 2005

U.C. Berkeley Division of Biostatistics Working Paper Series

Marginal structural models (MSM) provide a powerful tool for estimating the causal effect of a treatment, particularly in the context of longitudinal data structures. These models, introduced by Robins, model the marginal distributions of treatment-specific counterfactual outcomes, possibly conditional on a subset of the baseline covariates. However, standard MSM cannot incorporate modification of treatment effects by time-varying covariates. In the context of clinical decision-making, such time-varying effect modifiers are often of considerable interest, as they are used in practice to guide treatment decisions for an individual. In this article we introduce a generalization of marginal structural models, which we …
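
As background for what a standard MSM does, the sketch below estimates a point-treatment MSM by inverse probability of treatment weighting in simulated data. It uses the true propensity score for simplicity (in practice the propensity score would be estimated, e.g., by logistic regression) and does not implement the history-adjusted models introduced in the paper; all numerical settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Point-treatment setting: baseline confounder W affects both treatment A and outcome Y.
W = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-0.5 * W))          # true propensity score
A = rng.binomial(1, p_treat)
Y = 2.0 * A + 1.0 * W + rng.normal(size=n)        # true causal effect of A is 2.0

# Naive comparison is confounded by W.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Inverse-probability-of-treatment weights for the saturated MSM E[Y_a] = b0 + b1 * a.
weights = A / p_treat + (1 - A) / (1 - p_treat)
msm = (np.average(Y[A == 1], weights=weights[A == 1])
       - np.average(Y[A == 0], weights=weights[A == 0]))

print(f"naive difference: {naive:.2f}, IPTW/MSM estimate: {msm:.2f}")   # ~2.0
```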


History-Adjusted Marginal Structural Models: Optimal Treatment Strategies, Maya L. Petersen, Mark J. Van Der Laan Apr 2005

U.C. Berkeley Division of Biostatistics Working Paper Series

Much of clinical medicine involves choosing a future treatment plan that is expected to optimize a patient's long-term outcome, and modifying this treatment plan over time in response to changes in patient characteristics. However, dynamic treatment regimens, or decision rules for altering treatment in response to time-varying covariates, are rarely estimated based on observational data. In a companion paper, we introduced a generalization of Marginal Structural Models, named History-Adjusted Marginal Structural Models, that estimate modification of causal effects by time-varying covariates. Here, we illustrate how History-Adjusted Marginal Structural Models can be used to identify a specific type of optimal dynamic …


Insights Into Latent Class Analysis, Margaret S. Pepe, Holly Janes Jan 2005

UW Biostatistics Working Paper Series

Latent class analysis is a popular statistical technique for estimating disease prevalence and test sensitivity and specificity. It is used when a gold standard assessment of disease is not available but results of multiple imperfect tests are. We derive analytic expressions for the parameter estimates in terms of the raw data, under the conditional independence assumption. These expressions indicate explicitly how observed two- and three-way associations between test results are used to infer disease prevalence and test operating characteristics. Although reasonable if the conditional independence model holds, the estimators have no basis when it fails. We therefore caution against using …
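
To make the setup concrete, the sketch below simulates three conditionally independent binary tests and fits the two-class latent class model by EM, recovering prevalence, sensitivities, and specificities. The EM fit is only a stand-in for illustration; the paper instead derives closed-form expressions for these estimates, and all numerical settings here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate three imperfect binary tests, conditionally independent given disease.
n, prev = 5000, 0.2
sens = np.array([0.90, 0.80, 0.85])      # P(test positive | diseased)
spec = np.array([0.95, 0.90, 0.92])      # P(test negative | non-diseased)
D = rng.binomial(1, prev, n)
T = np.where(D[:, None] == 1,
             rng.binomial(1, sens, (n, 3)),
             rng.binomial(1, 1 - spec, (n, 3)))

# EM algorithm for the two-class latent class model under conditional independence.
p, se, sp = 0.3, np.full(3, 0.8), np.full(3, 0.8)    # starting values
for _ in range(500):
    # E-step: posterior probability of disease given each subject's test pattern
    like1 = p * np.prod(se**T * (1 - se)**(1 - T), axis=1)
    like0 = (1 - p) * np.prod((1 - sp)**T * sp**(1 - T), axis=1)
    post = like1 / (like1 + like0)
    # M-step: update prevalence, sensitivities, and specificities
    p = post.mean()
    se = (post[:, None] * T).sum(axis=0) / post.sum()
    sp = ((1 - post)[:, None] * (1 - T)).sum(axis=0) / (1 - post).sum()

print(f"prevalence: {p:.3f}")
print("sensitivities:", np.round(se, 3))
print("specificities:", np.round(sp, 3))
```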


Standardizing Markers To Evaluate And Compare Their Performances, Margaret S. Pepe, Gary M. Longton Jan 2005

UW Biostatistics Working Paper Series

Introduction: Markers that purport to distinguish subjects with a condition from those without a condition must be evaluated rigorously for their classification accuracy. A single approach to statistically evaluating and comparing markers is not yet established.

Methods: We suggest a standardization that uses the marker distribution in unaffected subjects as a reference. For an affected subject with marker value Y, the standardized placement value is the proportion of unaffected subjects with marker values that exceed Y.

Results: We apply the standardization to two illustrative datasets. In patients with pancreatic cancer, placement values calculated for the CA 19-9 marker are smaller …
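
A minimal sketch of the placement-value standardization described in the Methods paragraph above, using simulated marker values rather than the paper's CA 19-9 data; the function name and distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated marker values (not the data analyzed in the paper).
controls = rng.normal(loc=0.0, scale=1.0, size=500)   # unaffected reference group
cases = rng.normal(loc=1.5, scale=1.0, size=200)      # affected subjects

def placement_values(cases, controls):
    """For each affected subject with marker value Y, return the proportion of
    unaffected subjects whose marker value exceeds Y."""
    controls = np.sort(controls)
    # number of controls strictly greater than each case value
    exceed = len(controls) - np.searchsorted(controls, cases, side="right")
    return exceed / len(controls)

pv = placement_values(cases, controls)
print("median placement value:", round(np.median(pv), 3))
# A well-separated marker gives placement values concentrated near 0;
# up to ties, the mean placement value equals 1 minus the marker's empirical AUC.
```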


Combining Predictors For Classification Using The Area Under The ROC Curve, Margaret S. Pepe, Tianxi Cai, Zheng Zhang, Gary M. Longton Jan 2005

UW Biostatistics Working Paper Series

No single biomarker for cancer is considered adequately sensitive and specific for cancer screening. It is expected that the results of multiple markers will need to be combined in order to yield adequately accurate classification. Typically the objective function that is optimized for combining markers is the likelihood function. In this paper we consider an alternative objective function -- the area under the empirical receiver operating characteristic curve (AUC). We note that it yields consistent estimates of parameters in a generalized linear model for the risk score but does not require specifying the link function. Like logistic regression it yields …
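
To illustrate the idea of using the empirical AUC as the objective function, the sketch below scores a linear combination of two simulated markers and picks the combination coefficient by a crude grid search over the empirical AUC. The simulated distributions and the grid are arbitrary, and the grid search is only a stand-in for the estimation method developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated values of two markers (hypothetical data, not a cancer-screening study).
n_case, n_ctrl = 300, 300
cases = rng.multivariate_normal([1.0, 0.8], [[1, 0.3], [0.3, 1]], n_case)
ctrls = rng.multivariate_normal([0.0, 0.0], [[1, 0.3], [0.3, 1]], n_ctrl)

def empirical_auc(score_cases, score_ctrls):
    """P(case score exceeds control score), counting ties as one half."""
    diff = score_cases[:, None] - score_ctrls[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Risk score Y1 + b * Y2; search over b for the combination maximizing empirical AUC.
grid = np.linspace(-3, 3, 121)
aucs = [empirical_auc(cases[:, 0] + b * cases[:, 1],
                      ctrls[:, 0] + b * ctrls[:, 1]) for b in grid]
best = grid[int(np.argmax(aucs))]
print(f"best coefficient: {best:.2f}, empirical AUC: {max(aucs):.3f}")
```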