Open Access. Powered by Scholars. Published by Universities.®

2004

Statistics and Probability

Articles 1 - 30 of 274

Full-Text Articles in Physical Sciences and Mathematics

Analysis Of Oligonucleotide Array Experiments With Repeated Measures Using Mixed Models, Hao Li, Constance L. Wood, Thomas V. Getchell, Marilyn L. Getchell, Arnold J. Stromberg Dec 2004

Statistics Faculty Publications

BACKGROUND: Mixed factorial experiments with two or more factors are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (patients with Alzheimer's disease) or absence (control) of the disease, and brain region, either olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more …


Estimating Percentile-Specific Causal Effects: A Case Study Of Micronutrient Supplementation, Birth Weight, And Infant Mortality, Francesca Dominici, Scott L. Zeger, Giovanni Parmigiani, Joanne Katz, Parul Christian Dec 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

In developing countries, higher infant mortality is partially caused by poor maternal and fetal nutrition. Clinical trials of micronutrient supplementation are aimed at reducing the risk of infant mortality by increasing birth weight. Because infant mortality is greatest among low birth weight (LBW) infants (≤ 2500 grams), an effective intervention may need to increase the birth weight among the smallest babies. Although it has been demonstrated that supplementation increases birth weight in a trial conducted in Nepal, there is inconclusive evidence that the supplementation improves infant survival. It has been hypothesized that a potential benefit of the treatment …


A Hybrid Newton-Type Method For The Linear Regression In Case-Cohort Studies, Menggang Yu, Bin Nan Dec 2004

The University of Michigan Department of Biostatistics Working Paper Series

Case-cohort designs are increasingly common in large epidemiological cohort studies. Nan, Yu, and Kalbfleisch (2004) provided asymptotic results for censored linear regression models in case-cohort studies. In this article, we consider computational aspects of their proposed rank-based estimating methods. We show that the rank-based discontinuous estimating functions for case-cohort studies are monotone, a property established for cohort data in the literature, when generalized Gehan-type weights are used. Though the estimating problem can be formulated as a linear programming problem, as for cohort data, due to its easily uncontrollable large scale even for a …


Multiple Testing Procedures For Controlling Tail Probability Error Rates, Sandrine Dudoit, Mark J. Van Der Laan, Merrill D. Birkner Dec 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

The present article discusses and compares multiple testing procedures (MTPs) for controlling Type I error rates defined as tail probabilities for the number (gFWER) and proportion (TPPFP) of false positives among the rejected hypotheses. Specifically, we consider the gFWER- and TPPFP-controlling MTPs proposed recently by Lehmann & Romano (2004) and in a series of four articles by Dudoit et al. (2004), van der Laan et al. (2004b,a), and Pollard & van der Laan (2004). The Lehmann & Romano (2004) procedures are marginal, in the sense that they are based solely on the marginal distributions of the test statistics, i.e., …


Ranking USRDS Provider-Specific SMRs From 1998-2001, Rongheng Lin, Thomas A. Louis, Susan M. Paddock, Greg Ridgeway Dec 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Provider profiling (ranking, "league tables") is prevalent in health services research. Similarly, comparing educational institutions and identifying differentially expressed genes depend on ranking. Effective ranking procedures must be structured by a hierarchical (Bayesian) model and guided by a ranking-specific loss function; however, even optimal methods can perform poorly, and estimates must be accompanied by uncertainty assessments. We use the 1998-2001 Standardized Mortality Ratio (SMR) data from the United States Renal Data System (USRDS) as a platform to identify issues and approaches. Our analyses extend Liu et al. (2004) by combining evidence over multiple years via an AR(1) model; by considering estimates …


Referent Selection Strategies In Case-Crossover Analyses Of Air Pollution Exposure Data: Implications For Bias, Holly Janes, Lianne Sheppard, Thomas Lumley Dec 2004

UW Biostatistics Working Paper Series

The case-crossover design has been widely used to study the association between short term air pollution exposure and the risk of an acute adverse health event. The design uses cases only, and, for each individual, compares exposure just prior to the event with exposure at other control, or “referent” times. By making within-subject comparisons, time invariant confounders are controlled by design. Even more important in the air pollution setting is that, by matching referents to the index time, time varying confounders can also be controlled by design. Yet, the referent selection strategy is important for reasons other than control of …


Multiple Testing Procedures: R multtest Package And Applications To Genomics, Katherine S. Pollard, Sandrine Dudoit, Mark J. Van Der Laan Dec 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

The Bioconductor R package multtest implements widely applicable resampling-based single-step and stepwise multiple testing procedures (MTPs) for controlling a broad class of Type I error rates, in testing problems involving general data generating distributions (with arbitrary dependence structures among variables), null hypotheses, and test statistics. The current version of multtest provides MTPs for tests concerning means, differences in means, and regression parameters in linear and Cox proportional hazards models. Procedures are provided to control Type I error rates defined as tail probabilities for arbitrary functions of the numbers of false positives and rejected hypotheses. These error rates include tail probabilities …


Semiparametric Regression In Capture-Recapture Modelling, O. Gimenez, C. Barbraud, Ciprian M. Crainiceanu, S. Jenouvrier, B.T. Morgan Dec 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Capture-recapture models were developed to estimate survival using data arising from marking and monitoring wild animals over time. Variation in the survival process may be explained by incorporating relevant covariates. We develop nonparametric and semiparametric regression models for estimating survival in capture-recapture models. A fully Bayesian approach using MCMC simulations was employed to estimate the model parameters. The work is illustrated by a study of Snow petrels, in which survival probabilities are expressed as nonlinear functions of a climate covariate, using data from a 40-year study on marked individuals, nesting at Petrels Island, Terre Adelie.


Semi-Parametric Single-Index Two-Part Regression Models, Xiao-Hua Zhou, Hua Liang Dec 2004

UW Biostatistics Working Paper Series

In this paper, we propose a semi-parametric single-index two-part regression model to weaken the assumptions in parametric regression methods that are frequently used in the analysis of skewed data with additional zero values. The estimation procedure for the parameters of interest in the model is easily implemented. The proposed estimators are shown to be consistent and asymptotically normal. Through a simulation study, we show that the proposed estimators have reasonable finite-sample performance. We illustrate the application of the proposed method in a real study on the analysis of health care costs.


The Proportional Odds Model For Assessing Rater Agreement With Multiple Modalities, Elizabeth Garrett-Mayer, Steven N. Goodman, Ralph H. Hruban Dec 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

In this paper, we develop a model for evaluating an ordinal rating system where we assume that the true underlying disease state is continuous in nature. Our approach is motivated by a dataset of 35 microscopic slides with 35 representative duct lesions of the pancreas. Each of the slides was evaluated by eight raters using two novel rating systems (PanIN illustrations and PanIN nomenclature), where each rater used each system to rate the slide with slide identity masked between evaluations. We find that the two methods perform equally well but that differentiation of higher grade lesions is more consistent across raters …


Cross-Study Validation And Combined Analysis Of Gene Expression Microarray Data, Elizabeth Garrett-Mayer, Giovanni Parmigiani, Xiaogang Zhong, Leslie Cope, Edward Gabrielson Dec 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Investigations of transcript levels on a genomic scale using hybridization-based arrays have led to formidable advances in our understanding of the biology of many human illnesses. At the same time, these investigations have generated controversy because of the probabilistic nature of the conclusions and the surfacing of noticeable discrepancies between the results of studies addressing the same biological question. In this article we present simple and effective data analysis and visualization tools for gauging the degree to which the findings of one study are reproduced by others, and for integrating multiple studies in a single analysis. We describe these approaches in …


On The Design Of Efficient Priority Rules For Secured Creditors: Empirical Evidence From A Change In Law, Clas Bergström, Theodore Eisenberg, Stefan Sundgren Dec 2004

Cornell Law Faculty Publications

This article assesses the effect of a reduction in secured creditor priority on distributions and administrative costs in liquidating bankruptcy cases by reporting the first empirical study of the effect of a priority change. Priority reform had redistributive effects in liquidating bankruptcy. As expected, average payments to general unsecured creditors were significantly higher after the reform than before the reform and payments to secured creditors decreased. Reform did not increase the size of the pie to be distributed in bankruptcy. Nor did it increase the direct costs of bankruptcy.


Initial-Value Problem For Three-Dimensional Disturbances In A Hypersonic Boundary Layer, Eric Forgoston, Anatoli Tumin Dec 2004

Department of Applied Mathematics and Statistics Faculty Scholarship and Creative Works

An initial-value problem is formulated for a three-dimensional wave packet in a hypersonic boundary layer flow. The problem is solved using a Laplace transform with respect to time and Fourier transforms with respect to the streamwise and spanwise coordinates. The solution can be presented as a sum of modes consisting of continuous and discrete spectra of temporal stability theory. Two discrete modes, known as Mode S and Mode F, are of interest since they may be involved in a laminar-turbulent transition scenario. The continuous and discrete spectra are analyzed numerically, and the following features are revealed: (1) the synchronism of …


The Effect Of NaOCl, Ca(OH)2, MTA And MTAD On Root Dentin Fracture Resistance, Sunil Ilapogu Dec 2004

Loma Linda University Electronic Theses, Dissertations & Projects

Various materials are used for the treatment of immature teeth that have been subjected to trauma or decay. It is not well established what effect these materials may have on immature root dentin. A concern would be the resistance to fracture of the remaining root dentin. The purpose of this study was to compare the resistance to fracture of bovine teeth treated with sodium hypochlorite, calcium hydroxide, gray MTA, and gray MTA in conjunction with MTAD after certain periods of time. One hundred ninety-five freshly extracted, intact bovine incisors were prepared using a modified Haapasalo and Orstavik …


IP Algorithm Applied To Proteomics Data, Christopher Lee Green Nov 2004

Theses and Dissertations

Mass spectrometry has been used extensively in recent years as a valuable tool in the study of proteomics. However, the data thus produced exhibits hyper-dimensionality. Reducing the dimensionality of the data often requires the imposition of many assumptions which can be harmful to subsequent analysis. The IP algorithm is a dimension reduction algorithm, similar in purpose to latent variable analysis. It is based on the principle of maximum entropy and therefore imposes a minimum number of assumptions on the data. Partial Least Squares (PLS) is an algorithm commonly used with proteomics data from mass spectrometry in order to reduce the …


A Bayesian Mixture Model Relating Dose To Critical Organs And Functional Complication In 3D Conformal Radiation Therapy, Tim Johnson, Jeremy Taylor, Randall K. Ten Haken, Avraham Eisbruch Nov 2004

The University of Michigan Department of Biostatistics Working Paper Series

A goal of radiation therapy is to deliver maximum dose to the target tumor while minimizing complications due to irradiation of critical organs. Technological advances in 3D conformal radiation therapy have allowed great strides in realizing this goal; however, complications may still arise. Critical organs may be adjacent to tumors or in the path of the radiation beam. Several mathematical models have been proposed that describe a relationship between dose and observed functional complication; however, only a few published studies have successfully fit these models to data using modern statistical methods that make efficient use of the data. One complication …


Survival Analysis Using Auxiliary Variables Via Nonparametric Multiple Imputation, Chiu-Hsieh Hsu, Jeremy Taylor, Susan Murray, Daniel Commenges Nov 2004

The University of Michigan Department of Biostatistics Working Paper Series

We develop an approach, based on multiple imputation, that estimates the marginal survival distribution in survival analysis using auxiliary variables to recover information for censored observations. To conduct the imputation, we use two working survival models to define the nearest-neighbor imputing risk set. One model is for the event times and the other for the censoring times. Based on the imputing risk set, two nonparametric multiple imputation methods are considered: risk set imputation and the Kaplan-Meier estimator. For both methods a future event or censoring time is imputed for each censored observation. With a categorical auxiliary variable, we show that …
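For context on the Kaplan-Meier estimator named in the abstract, the product-limit computation can be sketched as follows (a generic illustration, not the authors' imputation procedure; the function and variable names are hypothetical):

```python
# Minimal Kaplan-Meier product-limit sketch: at each event time t,
# survival is multiplied by (1 - d_t / n_t), where d_t is the number
# of events at t and n_t the number at risk just before t.

def kaplan_meier(times, events):
    """Return (time, survival) pairs; events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    idx = 0
    while idx < len(data):
        t = data[idx][0]
        tied = [e for tt, e in data if tt == t]  # all subjects at time t
        d = sum(tied)                            # events at time t
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(tied)                   # drop events and censorings
        idx += len(tied)
    return curve

km = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
print([(t, round(s, 4)) for t, s in km])  # [(1, 0.75), (2, 0.5), (4, 0.0)]
```

Note that the censored observation at time 3 shrinks the risk set without producing a drop in the curve, which is exactly the information loss the imputation methods above aim to mitigate.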


Choice Of Monitoring Mechanism For Optimal Nonparametric Functional Estimation For Binary Data, Nicholas P. Jewell, Mark J. Van Der Laan, Stephen Shiboski Nov 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

Optimal designs of dose levels in order to estimate parameters from a model for binary response data have a long and rich history. These designs are based on parametric models. Here we consider fully nonparametric models with interest focused on estimation of smooth functionals using plug-in estimators based on the nonparametric maximum likelihood estimator. An important application of the results is the derivation of the optimal choice of the monitoring time distribution function for current status observation of a survival distribution. The optimal choice depends in a simple way on the dose response function and the form of the functional. …


On Marginalized Multilevel Models And Their Computation, Michael E. Griswold, Scott L. Zeger Nov 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, there has been a gap in their practical application arising from a lack of readily available estimation procedures. We extend the marginalized multilevel model to allow for nonlinear functions in both the mean and …


Semiparametric Binary Regression Under Monotonicity Constraints, Moulinath Banerjee, Pinaki Biswas, Debashis Ghosh Nov 2004

The University of Michigan Department of Biostatistics Working Paper Series

Summary: We study a binary regression model where the response variable $\Delta$ is the indicator of an event of interest (for example, the incidence of cancer) and the set of covariates can be partitioned as $(X,Z)$ where $Z$ (real valued) is the covariate of primary interest and $X$ (vector valued) denotes a set of control variables. For any fixed $X$, the conditional probability of the event of interest is assumed to be a monotonic function of $Z$. The effect of the control variables is captured by a regression parameter $\beta$. We show that the baseline conditional probability function (corresponding to …
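A monotone conditional probability function of the kind described above is typically estimated nonparametrically with the pool-adjacent-violators algorithm (PAVA) for isotonic regression; a minimal sketch, not the authors' implementation (the function name is hypothetical):

```python
# Minimal pool-adjacent-violators (PAVA) sketch for isotonic regression:
# scan left to right, merging adjacent blocks (replacing them by their
# weighted mean) whenever they violate the nondecreasing constraint.

def pava(y):
    """Return the nondecreasing least-squares fit to the sequence y."""
    blocks = []  # list of (block_mean, block_size)
    for v in y:
        blocks.append((float(v), 1))
        # pool while the last two blocks are out of order
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, s2 = blocks.pop()
            m1, s1 = blocks.pop()
            blocks.append(((m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2))
    fit = []
    for mean, size in blocks:
        fit.extend([mean] * size)
    return fit

print(pava([1, 3, 2, 4]))  # [1.0, 2.5, 2.5, 4.0]
```

The violating pair (3, 2) is pooled to its mean 2.5, yielding a nondecreasing step function, which is the shape the baseline conditional probability is constrained to take.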


A Bayesian Method For Finding Interactions In Genomic Studies, Wei Chen, Debashis Ghosh, Trivellore E. Raghunathan, Sharon Kardia Nov 2004

The University of Michigan Department of Biostatistics Working Paper Series

An important step in building a multiple regression model is the selection of predictors. In genomic and epidemiologic studies, datasets with a small sample size and a large number of predictors are common. In such settings, most standard methods for identifying a good subset of predictors are unstable. Furthermore, there is an increasing emphasis towards identification of interactions, which has not been studied much in the statistical literature. We propose a method, called BSI (Bayesian Selection of Interactions), for selecting predictors in a regression setting when the number of predictors is considerably larger than the sample size with a focus …


Deletion/Substitution/Addition Algorithm For Partitioning The Covariate Space In Prediction, Annette Molinaro, Mark J. Van Der Laan Nov 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

We propose a new method for predicting censored (and non-censored) clinical outcomes from a highly-complex covariate space. Previously we suggested a unified strategy for predictor construction, selection, and performance assessment. Here we introduce a new algorithm which generates a piecewise constant estimation sieve of candidate predictors based on an intensive and comprehensive search over the entire covariate space. This algorithm allows us to elucidate interactions and correlation patterns in addition to main effects.


Confidence Intervals On Subsets May Be Misleading, Juliet Popper Shaffer Nov 2004

Journal of Modern Applied Statistical Methods

A combination of hypothesis testing and confidence interval construction is often used in social and behavioral science studies. Sometimes confidence intervals are computed or reported only if a null hypothesis is rejected, perhaps to see whether the range of values is of practical importance. Sometimes they are constructed or reported only if a null hypothesis is accepted, in order to assess the range of plausible nonnull values due to inadequate power to detect them. Even if always computed, they are interpreted differently, depending on whether the null value is or is not included. Furthermore, many studies in which the null …


Confidence Elicitation And Anchoring In The Respondent-Generated Intervals (Rgi) Protocol, Liping Chu, S. James Press, Judith M. Tanur Nov 2004

Journal of Modern Applied Statistical Methods

The Respondent-Generated Intervals protocol (RGI) has been used to have respondents recall the answer to a factual question by giving not only a point estimate but also bounds within which they feel it is almost certain that the true value of the quantity being reported upon falls. The RGI protocol is elaborated in this article with the goal of improving the accuracy of the estimators by introducing cueing mechanisms to direct confident (and thus presumably accurate) respondents to give shorter intervals and less confident (and thus presumably less accurate) respondents to give longer ones.


Appeal Rates And Outcomes In Tried And Nontried Cases: Further Exploration Of Anti-Plaintiff Appellate Outcomes, Theodore Eisenberg Nov 2004

Cornell Law Faculty Publications

Federal data sets covering district court and appellate court civil cases for cases terminating in fiscal years 1988 through 2000 are analyzed. Appeals are filed in 10.9 percent of filed cases, and 21.0 percent of cases if one limits the sample to cases with a definitive judgment for plaintiff or defendant. The appeal rate is 39.6 percent in tried cases compared to 10.0 percent of nontried cases. For cases with definitive judgments, the appeal filing rate is 19.0 percent in nontried cases and 40.9 percent in tried cases. Tried cases with definitive judgments are appealed to a conclusion on the …


Laboratory Routines Cause Animal Stress, Jonathan P. Balcombe, Neal D. Barnard, Chad Sandusky Nov 2004

Laboratory Experiments Collection

Eighty published studies were appraised to document the potential stress associated with three routine laboratory procedures commonly performed on animals: handling, blood collection, and orogastric gavage. We defined handling as any non-invasive manipulation occurring as part of routine husbandry, including lifting an animal and cleaning or moving an animal's cage. Significant changes in physiologic parameters correlated with stress (e.g., serum or plasma concentrations of corticosterone, glucose, growth hormone or prolactin, heart rate, blood pressure, and behavior) were associated with all three procedures in multiple species in the studies we examined. The results of these studies demonstrated that animals responded with …


Assessing Treatment Effects In Randomized Longitudinal Two-Group Designs With Missing Observations, James Algina, H. J. Keselman Nov 2004

Journal of Modern Applied Statistical Methods

SAS’s PROC MIXED can be problematic when analyzing data from randomized longitudinal two-group designs when observations are missing over time. Overall (1996, 1999) and colleagues found a number of procedures that are effective in controlling the number of false positives (Type I errors) and yet are sensitive (powerful) in detecting treatment effects. Two favorable methods incorporate time in study and baseline scores to model the missing data mechanism; one method was a single-stage PROC MIXED ANCOVA solution and the other was a two-stage endpoint analysis using the change scores as dependent scores. Because the two-stage approach can lack sensitivity to …


An Overview Of The Respondent-Generated Intervals (Rgi) Approach To Sample Surveys, S. James Press, Judith M. Tanur Nov 2004

Journal of Modern Applied Statistical Methods

This article brings together many years of research on the Respondent-Generated Intervals (RGI) approach to recall in factual sample surveys. Additionally presented is new research on the use of RGI in opinion surveys and the use of RGI with gamma-distributed data. The research combines Bayesian hierarchical modeling with various cognitive aspects of sample surveys.


Multivariate Contrasts For Repeated Measures Designs Under Assumption Violations, Lisa M. Lix, Aynslie M. Hinds Nov 2004

Journal of Modern Applied Statistical Methods

Conventional and approximate degrees of freedom procedures for testing multivariate interaction contrasts in groups by trials repeated measures designs were compared under assumption violation conditions. Procedures were based on either least-squares or robust estimators. Power generally favored test procedures based on robust estimators for non-normal distributions, but was influenced by the degree of non-normality, the definition of power, and the magnitude of the multivariate effect size.


Modeling Incomplete Longitudinal Data, Hakan Demirtas Nov 2004

Journal of Modern Applied Statistical Methods

This article presents a review of popular parametric, semiparametric and ad-hoc approaches for analyzing incomplete longitudinal data.