Open Access. Powered by Scholars. Published by Universities.®

Statistics and Probability Commons


Articles 1 - 10 of 10

Full-Text Articles in Statistics and Probability

Obtaining Critical Values For Test Of Markov Regime Switching, Douglas G. Steigerwald, Valerie Bostwick Oct 2012


Douglas G. Steigerwald

For Markov regime-switching models, testing for the possible presence of more than one regime requires the use of a non-standard test statistic. Carter and Steigerwald (forthcoming, Journal of Econometric Methods) derive in detail the analytic steps needed to implement the test of Markov regime-switching proposed by Cho and White (2007, Econometrica). We summarize the implementation steps and address the computational issues that arise. A new command to compute regime-switching critical values, rscv, is introduced and presented in the context of empirical research.
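The general recipe behind simulated critical values can be sketched generically: draw data under the single-regime null, compute the test statistic, and take an empirical quantile. The statistic below is a simple stand-in (a likelihood-ratio statistic for a normal mean with known variance, whose null distribution is chi-square with one degree of freedom), not the Cho–White QLR statistic that rscv tabulates.

```python
import numpy as np

def mc_critical_value(stat_fn, simulate_null, reps=20000, level=0.95, seed=0):
    """Approximate the level-`level` critical value of a test statistic
    by simulating its distribution under the null hypothesis."""
    rng = np.random.default_rng(seed)
    stats = np.array([stat_fn(simulate_null(rng)) for _ in range(reps)])
    return np.quantile(stats, level)

# Stand-in example: LR statistic for H0: mean = 0 with known unit variance.
# Its null distribution is chi-square(1), so the simulated 95% critical
# value should come out near the analytic value 3.84.
n = 50
cv = mc_critical_value(
    stat_fn=lambda x: n * x.mean() ** 2,
    simulate_null=lambda rng: rng.standard_normal(n),
)
print(round(cv, 2))
```

For the regime-switching QLR statistic the null distribution is non-standard and depends on nuisance parameters, which is why a dedicated command is needed rather than a textbook quantile.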


Big Data And The Future, Sherri Rose Jul 2012


Sherri Rose

No abstract provided.


Targeted Maximum Likelihood Estimation For Dynamic Treatment Regimes In Sequential Randomized Controlled Trials, Paul Chaffee, Mark J. Van Der Laan Jun 2012


Paul H. Chaffee

Sequential Randomized Controlled Trials (SRCTs) are rapidly becoming essential tools in the search for optimized treatment regimes in ongoing treatment settings. Analyzing data for multiple time-point treatments with a view toward optimal treatment regimes is of interest in many types of afflictions: HIV infection, Attention Deficit Hyperactivity Disorder in children, leukemia, prostate cancer, renal failure, and many others. Methods for analyzing data from SRCTs exist but they are either inefficient or suffer from the drawbacks of estimating equation methodology. We describe an estimation procedure, targeted maximum likelihood estimation (TMLE), which has been fully developed and implemented in point treatment settings, …
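Since the abstract notes that TMLE is fully developed in point-treatment settings, the flavor of the procedure can be illustrated there: fit an initial outcome model, then "target" it with a one-dimensional fluctuation along the clever covariate before taking the substitution estimate. This is a minimal sketch under an assumed simulated data-generating process, not the sequential-trial estimator of the paper.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_fit(X, y, iters=25):
    """Plain Newton-Raphson logistic regression (no regularization)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = expit(X @ beta)
        H = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Hypothetical point-treatment data: confounder W, treatment A, outcome Y.
rng = np.random.default_rng(1)
n = 5000
W = rng.standard_normal(n)
A = rng.binomial(1, expit(0.5 * W))
Y = rng.binomial(1, expit(A + W))

# Step 1: initial outcome model Q(A, W) and treatment model g(W).
Xq = np.column_stack([np.ones(n), A, W])
bq = logistic_fit(Xq, Y)
bg = logistic_fit(np.column_stack([np.ones(n), W]), A)
g = expit(bg[0] + bg[1] * W)          # P(A = 1 | W)
lQ = Xq @ bq                          # logit Q(A_i, W_i)
lQ1 = bq[0] + bq[1] + bq[2] * W       # logit Q(1, W)
lQ0 = bq[0] + bq[2] * W               # logit Q(0, W)

# Step 2: targeting. Fluctuate Q along the clever covariate
# H(A, W) = A/g(W) - (1-A)/(1-g(W)), solving the score equation
# for the single fluctuation parameter eps by 1-d Newton.
H = A / g - (1 - A) / (1 - g)
eps = 0.0
for _ in range(25):
    p = expit(lQ + eps * H)
    eps += np.sum(H * (Y - p)) / np.sum(H**2 * p * (1 - p))

# Step 3: substitution estimator of the average treatment effect
# using the updated (targeted) outcome regression Q*.
Q1s = expit(lQ1 + eps / g)
Q0s = expit(lQ0 - eps / (1 - g))
ate = np.mean(Q1s - Q0s)
print(round(ate, 3))
```

The sequential-trial version in the paper repeats this targeting backwards through the time points of the treatment regime.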


Variances For Maximum Penalized Likelihood Estimates Obtained Via The Em Algorithm, Mark Segal, Peter Bacchetti, Nicholas Jewell Apr 2012


Mark R Segal

We address the problem of providing variances for parameter estimates obtained under a penalized likelihood formulation through use of the EM algorithm. The proposed solution represents a synthesis of two existent techniques. Firstly, we exploit the supplemented EM algorithm developed in Meng and Rubin (1991) that provides variance estimates for maximum likelihood estimates obtained via the EM algorithm. Their procedure relies on evaluating the Jacobian of the mapping induced by the EM algorithm. Secondly, we utilize a result from Green (1990) that provides an expression for the Jacobian of the mapping induced by the EM algorithm applied to a penalized …
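The supplemented EM idea can be seen in a toy unpenalized problem (the paper's contribution is extending this to the penalized case via Green's Jacobian result): estimate the Jacobian DM of the EM mapping numerically at the fixed point, then inflate the complete-data variance by (I - DM)^{-1}. The model below is an assumed illustration: a normal mean with known unit variance and m missing observations, where the answer is known in closed form.

```python
import numpy as np

# Toy model: X_1..X_n iid N(theta, 1); the last m values are missing.
# The E-step imputes each missing value by the current theta, so the
# EM map averages the observed sum with m copies of theta, and its
# fixed point is the observed-data mean.
rng = np.random.default_rng(0)
n, m = 100, 40
x_obs = rng.normal(2.0, 1.0, size=n - m)

def em_step(theta):
    return (x_obs.sum() + m * theta) / n

# Run EM to convergence.
theta = 0.0
for _ in range(200):
    theta = em_step(theta)

# Supplemented EM: Var = I_oc^{-1} (I - DM)^{-1}, where I_oc is the
# complete-data information (n here, since sigma^2 = 1) and DM is the
# Jacobian of the EM map at the fixed point, estimated numerically.
h = 1e-6
DM = (em_step(theta + h) - em_step(theta - h)) / (2 * h)
var_sem = (1.0 / n) / (1.0 - DM)

print(round(var_sem, 6), 1.0 / (n - m))  # both equal sigma^2 / n_obs
```

Here DM = m/n, so the inflation factor (1 - DM)^{-1} exactly converts the complete-data variance 1/n into the observed-data variance 1/(n - m).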


Backcalculation Of Hiv Infection Rates, Peter Bacchetti, Mark Segal, Nicholas Jewell Apr 2012


Mark R Segal

Backcalculation is an important method for reconstructing past rates of human immunodeficiency virus (HIV) infection and for estimating the current prevalence of HIV infection and future incidence of acquired immunodeficiency syndrome (AIDS). This paper reviews backcalculation techniques, focusing on the key assumptions of the method, including the necessary information regarding incubation, reporting delay, and models for the infection curve. A summary is given of the extent to which the appropriate external information is available and whether checks of the relevant assumptions are possible through use of data on AIDS incidence from surveillance systems. A likelihood approach to backcalculation is described …
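The core forward model behind backcalculation is a convolution: expected diagnoses are past infections weighted by the incubation-time distribution, and backcalculation inverts this map given observed counts. The sketch below uses made-up numbers and a simple EM ("back-projection") update for the Poisson likelihood; it omits reporting delay, smoothing, and the other refinements the paper discusses.

```python
import numpy as np

# Forward model: E[D_t] = sum_{s <= t} I_s * f_{t-s}, where I_s are
# infections at time s and f is the incubation-time distribution.
incubation = np.array([0.2, 0.5, 0.3])   # assumed P(delay = 0, 1, 2)

def expected_diagnoses(I, f):
    T = len(I)
    return np.array([
        sum(I[s] * f[t - s] for s in range(t + 1) if t - s < len(f))
        for t in range(T)
    ])

true_infections = np.array([10.0, 20.0, 30.0])
D = expected_diagnoses(true_infections, incubation)
print(D)  # [ 2.  9. 19.]

# Backcalculation: recover I from observed D and known f via the EM
# update I_s <- (I_s / F_s) * sum_t f_{t-s} D_t / mu_t, with
# F_s = sum_{t >= s} f_{t-s} the chance an infection at s is seen.
I = np.full(3, 10.0)
for _ in range(2000):
    mu = expected_diagnoses(I, incubation)
    for s in range(3):
        F = incubation[: 3 - s].sum()
        I[s] *= sum(incubation[t - s] * D[t] / mu[t] for t in range(s, 3)) / F

print(np.round(I, 2))  # approaches the true infection curve
```

In practice D is noisy and the infection curve is only weakly identified near the present, which is why the modeling assumptions reviewed in the paper matter so much.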


Loss Function Based Ranking In Two-Stage, Hierarchical Models, Rongheng Lin, Thomas A. Louis, Susan M. Paddock, Greg Ridgeway Mar 2012


Rongheng Lin

Several authors have studied the performance of optimal, squared error loss (SEL) estimated ranks. Though these are effective, in many applications interest focuses on identifying the relatively good (e.g., in the upper 10%) or relatively poor performers. We construct loss functions that address this goal and evaluate candidate rank estimates, some of which optimize specific loss functions. We study performance for a fully parametric hierarchical model with a Gaussian prior and Gaussian sampling distributions, evaluating performance for several loss functions. Results show that though SEL-optimal ranks and percentiles do not specifically focus on classifying with respect to a percentile cut …
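The SEL-optimal ranks the abstract refers to can be computed directly from posterior samples: rank the units within each draw, average those ranks, and rank the averages. The simulated posterior below is an assumed illustration with well-separated units; the cutoff-based classification at the end hints at the percentile-focused losses the paper constructs.

```python
import numpy as np

rng = np.random.default_rng(0)
K, draws = 6, 4000
# Posterior samples for K unit-specific parameters: well-separated
# normals, so the true ordering is unit 0 < 1 < ... < K-1.
theta = rng.normal(loc=np.arange(K, dtype=float), scale=0.1, size=(draws, K))

# Rank within each posterior draw (1 = smallest), then average:
# Rbar_k = E[rank_k | data] is the SEL-optimal (real-valued) rank.
per_draw_ranks = theta.argsort(axis=1).argsort(axis=1) + 1
Rbar = per_draw_ranks.mean(axis=0)

# Integer SEL-optimal ranks are the ranks of the posterior mean ranks.
sel_ranks = Rbar.argsort().argsort() + 1
print(sel_ranks)  # [1 2 3 4 5 6]

# For "is unit k among the top third?" questions, classification by the
# posterior probability of exceeding the percentile cutoff is the more
# natural target than SEL ranks.
cutoff = (2 / 3) * K
p_top = (per_draw_ranks > cutoff).mean(axis=0)
print(p_top.round(2))
```

With heavy overlap between units (unlike this separated example), SEL ranks and cutoff-probability classification can disagree, which is the regime the paper's loss functions are designed for.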


Simulating Non-Normal Distributions With Specified L-Moments And L-Correlations, Todd C. Headrick, Mohan D. Pant Jan 2012


Todd Christopher Headrick

This paper derives a procedure for simulating continuous non-normal distributions with specified L-moments and L-correlations in the context of power method polynomials of order three. It is demonstrated that the proposed procedure has computational advantages over the traditional product-moment procedure in terms of solving for intermediate correlations. Simulation results also demonstrate that the proposed L-moment-based procedure is an attractive alternative to the traditional procedure when distributions with more severe departures from normality are considered. Specifically, estimates of L-skew and L-kurtosis are superior to the conventional estimates of skew and kurtosis in terms of both relative bias and relative standard error. …
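The L-moment side of the procedure rests on standard probability-weighted-moment estimators (due to Hosking); the sketch below only shows how sample L-moments, L-skew, and L-kurtosis are computed, not the paper's solution for the power-method polynomial coefficients.

```python
import numpy as np

def sample_l_moments(x):
    """First four sample L-moments via probability-weighted moments b_r
    (requires n > 3 distinct-enough observations)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3)
                / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    # Return mean, L-scale, L-skew (tau_3), L-kurtosis (tau_4).
    return l1, l2, l3 / l2, l4 / l2

# For the symmetric sample [1, 2, 3, 4]: l1 = 2.5, l2 = 5/6 (half the
# mean pairwise absolute difference), and tau_3 = tau_4 = 0.
l1, l2, t3, t4 = sample_l_moments([1, 2, 3, 4])
print(l1, round(l2, 4))
```

Because L-moments are linear in the order statistics, these estimates inherit the bias and standard-error advantages over conventional skew and kurtosis that the abstract reports.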


Proportional Mean Residual Life Model For Right-Censored Length-Biased Data, Gary Kwun Chuen Chan, Ying Qing Chen, Chongzhi Di Jan 2012


Chongzhi Di

To study disease association with risk factors in epidemiologic studies, cross-sectional sampling is often more focused and less costly for recruiting study subjects who have already experienced initiating events. For time-to-event outcome, however, such a sampling strategy may be length-biased. Coupled with censoring, analysis of length-biased data can be quite challenging, due to the so-called “induced informative censoring” in which the survival time and censoring time are correlated through a common backward recurrence time. We propose to use the proportional mean residual life model of Oakes and Dasu (1990) for analysis of censored length-biased survival data. Several nonstandard data structures, …
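The quantity being modeled is the mean residual life m(t) = E[T - t | T > t]; the Oakes–Dasu model makes it proportional across covariate groups, m(t | z) = m0(t) exp(beta'z). A minimal empirical version, ignoring censoring and length bias entirely (which are exactly the complications the paper addresses), looks like this:

```python
import numpy as np

def mean_residual_life(times, t):
    """Empirical mean residual life m(t) = E[T - t | T > t],
    assuming complete (uncensored, unbiased) data."""
    times = np.asarray(times, dtype=float)
    alive = times[times > t]
    return (alive - t).mean()

# Toy survival times: among subjects surviving past t = 1.5, the
# average remaining life is (0.5 + 1.5 + 2.5) / 3 = 1.5.
T = np.array([1.0, 2.0, 3.0, 4.0])
print(mean_residual_life(T, 1.5))  # 1.5
```

Under cross-sectional (length-biased) sampling with censoring, this naive estimator is invalid; the paper builds estimation procedures for the proportional MRL model that account for both features.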


Testing For Regime Switching: A Comment, Douglas Steigerwald, Andrew Carter Dec 2011


Douglas G. Steigerwald

We analyze an autoregressive model with Markov regime switching to examine the properties of the quasi-likelihood ratio test developed by Cho and White (2007). For such a model, we show that consistency of the quasi-maximum likelihood estimator for the population parameter values, on which consistency of the test is based, does not hold. We describe a condition that ensures consistency of the estimator and discuss the consistency of the test in the absence of consistency of the estimator.


Some Non-Asymptotic Properties Of Parametric Bootstrap P-Values, Chris Lloyd Dec 2011


Chris J. Lloyd

The bootstrap P-value is the exact tail probability of a test statistic, calculated assuming the nuisance parameter equals the null maximum likelihood (ML) estimate. For discrete data, bootstrap P-values perform remarkably well even for small samples, while standard first-order methods perform surprisingly poorly. Why is this? Detailed numerical calculations in Lloyd (2012a) strongly suggest that the good performance of bootstrap is not explained by asymptotics. In this paper, I establish several desirable non-asymptotic properties of bootstrap P-values. The most important of these is that bootstrap will correct ‘bad’ ordering of the sample space which leads to a more …
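For discrete data the bootstrap P-value as defined here can often be computed exactly by enumerating the sample space. The example below is an assumed illustration for comparing two binomial proportions, with the nuisance parameter (the common success probability) fixed at its pooled null ML estimate.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def bootstrap_p_value(y1, n1, y2, n2):
    """Exact parametric-bootstrap P-value for H0: p1 = p2 versus p1 > p2,
    using T = p1hat - p2hat, with the nuisance parameter set to its
    null ML estimate (the pooled proportion)."""
    p_null = (y1 + y2) / (n1 + n2)
    t_obs = y1 / n1 - y2 / n2
    p_val = 0.0
    for a in range(n1 + 1):          # enumerate the whole sample space
        for b in range(n2 + 1):
            if a / n1 - b / n2 >= t_obs - 1e-12:
                p_val += binom_pmf(a, n1, p_null) * binom_pmf(b, n2, p_null)
    return p_val

# 9/10 successes versus 3/10: the exact tail probability under the
# pooled null estimate is small, as a first-order test would suggest,
# but the bootstrap version requires no asymptotic approximation.
print(round(bootstrap_p_value(9, 10, 3, 10), 4))
```

The "ordering of the sample space" discussed in the paper is exactly the ordering induced by the statistic T in the enumeration above; a bad choice of T yields a bad ordering, which the bootstrap P-value can partially repair.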