Open Access. Powered by Scholars. Published by Universities.®

Statistics and Probability Commons

Statistical Theory

2010


Articles 61 - 90 of 91

Full-Text Articles in Statistics and Probability

Median-Unbiased Optimal Smoothing And Trend Extraction, Dimitrios D. Thomakos May 2010

Journal of Modern Applied Statistical Methods

The problem of smoothing a time series for extracting its low frequency characteristics, collectively called its trend, is considered. A competitive approach is proposed and compared with existing methods in choosing the optimal degree of smoothing based on the distribution of the residuals from the smooth trend.


An Evaluation Of Multiple Imputation For Meta-Analytic Structural Equation Modeling, Carolyn F. Furlow, S. Natasha Beretvas May 2010

Journal of Modern Applied Statistical Methods

A simulation study was used to evaluate multiple imputation (MI) to handle MCAR correlations in the first step of meta-analytic structural equation modeling: the synthesis of the correlation matrix and the test of homogeneity. No substantial parameter bias resulted from using MI. Although some SE bias was found for meta-analyses involving smaller numbers of studies, the homogeneity test was never rejected when using MI.


Can Specification Searches Be Useful For Hypothesis Generation?, Samuel B. Green, Marilyn S. Thompson May 2010

Journal of Modern Applied Statistical Methods

Previous studies suggest that results from specification searches, as typically employed in structural equation modeling, should not be used to reach strong research conclusions due to their poor reliability. Analyses of computer generated data indicate that search results can be sufficiently reliable for exploratory purposes with properly designed and analyzed studies.


Measuring Openness, Gaetano Ferrieri May 2010

Journal of Modern Applied Statistical Methods

A method for measuring international openness is elaborated. This synthetic indicator measures the capacity of countries for a given phenomenon, adjusted for their weight in the same phenomenon. The method, implemented for international trade and illustrated here with a case study of merchandise exports, has a wide range of applications in the socio-economic field.


Another Look At Resampling: Replenishing Small Samples With Virtual Data Through S-Smart, Haiyan Bai, Wei Pan, Leigh Lihshing Wang, Phillip Neal Ritchey May 2010

Journal of Modern Applied Statistical Methods

A new resampling method is introduced to generate virtual data through a smoothing technique for replenishing small samples. The replenished analyzable sample retains the statistical properties of the original small sample, has small standard errors and possesses adequate statistical power.
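The general idea of replenishing a small sample with virtual data can be illustrated with a generic smoothed bootstrap, which resamples the observed values and perturbs each draw with Gaussian kernel noise. This is a minimal sketch using a rule-of-thumb bandwidth, not the authors' S-SMART procedure, and the data are illustrative:

```python
import numpy as np

def smoothed_bootstrap(x, m, seed=0):
    """Draw m 'virtual' observations: resample the small sample with
    replacement, then add Gaussian noise whose scale follows
    Silverman's rule-of-thumb kernel bandwidth."""
    rng = np.random.default_rng(seed)
    h = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)  # rule-of-thumb bandwidth
    draws = rng.choice(x, size=m, replace=True)    # ordinary bootstrap draws
    return draws + rng.normal(scale=h, size=m)     # kernel smoothing noise

# Illustrative small sample (not from the article)
small = np.array([4.1, 5.3, 3.8, 4.9, 5.0, 4.4, 5.6, 4.2, 4.7, 5.1])
virtual = smoothed_bootstrap(small, m=500)
```

The replenished sample keeps the location and rough shape of the original while providing a larger analyzable base.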


Shrinkage Estimation In The Inverse Rayleigh Distribution, Gyan Prakash May 2010

Journal of Modern Applied Statistical Methods

The properties of the shrinkage test–estimators of the parameter were studied for an inverse Rayleigh model under the asymmetric loss function. Both the single and double–stage shrinkage test–estimators are considered.


Nonlinear Parameterization In Bi-Criteria Sample Balancing, Stan Lipovetsky May 2010

Journal of Modern Applied Statistical Methods

Sample balancing is widely used in applied research to adjust sample data to achieve better correspondence with Census statistics. The classic Deming-Stephan iterative proportional approach finds the weights of observations by fitting the cross-tables of sample counts to known margins. This work considers a bi-criteria objective for finding weights with the maximum possible effective base size. The approach is presented as a ridge regression with an exponential nonlinear parameterization that produces nonnegative weights for sample balancing.
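The classic Deming-Stephan step mentioned above can be sketched as iterative proportional fitting (raking) of a two-way count table to known margins; the table and margins below are illustrative numbers, not from the paper:

```python
import numpy as np

def ipf(table, row_margins, col_margins, tol=1e-10, max_iter=1000):
    """Deming-Stephan iterative proportional fitting: alternately rescale
    rows and columns of a 2-D count table until both sets of sums match
    the known margins."""
    t = table.astype(float).copy()
    for _ in range(max_iter):
        t *= (row_margins / t.sum(axis=1))[:, None]   # fit row sums
        t *= (col_margins / t.sum(axis=0))[None, :]   # fit column sums
        if (np.abs(t.sum(axis=1) - row_margins).max() < tol and
                np.abs(t.sum(axis=0) - col_margins).max() < tol):
            break
    return t

# Illustrative sample counts vs. Census margins (totals must agree)
sample = np.array([[30.0, 20.0], [10.0, 40.0]])
fitted = ipf(sample,
             row_margins=np.array([60.0, 40.0]),
             col_margins=np.array([45.0, 55.0]))
weights = fitted / sample   # per-cell balancing weights for observations
```

The per-cell ratio of fitted to observed counts gives the balancing weight applied to every observation in that cell.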


Combining Independent Tests Of Conditional Shifted Exponential Distribution, Abedel-Qader S. Al-Masri May 2010

Journal of Modern Applied Statistical Methods

The problem of combining n independent tests as n → ∞ for testing that variables are uniformly distributed over the interval (0, 1) compared to their having a conditional shifted exponential distribution with probability density function f(x|θ) = e^(−(x−γθ)), x ≥ γθ, θ ∈ [a, ∞), a ≥ 0 was studied. This was examined for the case where θ1, θ2, … are distributed according to the distribution function (DF) F and when the DF is Gamma(1, 2). Six omnibus methods were compared via the Bahadur efficiency. It is shown that, as γ → 0 and …


Estimations On The Generalized Exponential Distribution Using Grouped Data, Hassan Pazira, Parviz Nasiri May 2010

Journal of Modern Applied Statistical Methods

Classical and Bayesian estimators are obtained for the shape parameter of the Generalized Exponential distribution under grouped data. In the Bayesian estimation, three types of loss functions are considered: the squared-error loss function, which is symmetric, and the LINEX and precautionary loss functions, which are asymmetric. These estimators are compared empirically with the corresponding estimators derived from un-grouped data using Monte Carlo simulation.


A Comparative Study For Bandwidth Selection In Kernel Density Estimation, Omar M. Eidous, Mohammad Abd Alrahem Shafeq Marie, Mohammed H. Baker Al-Haj Ebrahem May 2010

Journal of Modern Applied Statistical Methods

The nonparametric kernel density estimation method makes no assumptions about the functional form of the curves of interest; hence it allows flexible modeling of data. A crucial problem in kernel density estimation is how to determine the bandwidth (smoothing) parameter. This article examines the most important bandwidth selection methods, in particular least squares cross-validation, biased cross-validation, direct plug-in, solve-the-equation rules and contrast methods. Methods are described and expressions are presented. The main practical contribution is a comparative simulation study that aims to isolate the most promising methods. The performance of each method is evaluated on the basis of the …
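Least squares cross-validation, the first method named above, can be sketched for a Gaussian kernel: the integrated squared density has a closed form, and the data term uses leave-one-out density estimates. This is a generic textbook implementation, not the article's comparison code:

```python
import numpy as np

def lscv_score(x, h):
    """Least-squares cross-validation score for a Gaussian-kernel KDE:
    integral of f_hat^2 minus twice the mean leave-one-out density."""
    n = len(x)
    d = x[:, None] - x[None, :]                      # pairwise differences
    norm = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))
    # closed form for the integral of f_hat^2: Gaussian kernels at sqrt(2)*h
    int_f2 = norm(d, h * np.sqrt(2)).sum() / n**2
    # leave-one-out density at each observation (drop the diagonal term)
    k = norm(d, h)
    loo = (k.sum(axis=1) - norm(0.0, h)) / (n - 1)
    return int_f2 - 2.0 * loo.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=200)
grid = np.linspace(0.05, 1.5, 60)
h_lscv = grid[np.argmin([lscv_score(x, h) for h in grid])]  # selected bandwidth
```

Minimizing the score over a bandwidth grid gives the LSCV choice; plug-in and biased cross-validation replace the criterion, not the grid search.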


Applying Multiple Imputation With Geostatistical Models To Account For Item Nonresponse In Environmental Data, Breda Munoz, Virginia M. Lesser, Ruben A. Smith May 2010

Journal of Modern Applied Statistical Methods

Methods proposed to solve the missing data problem in estimation procedures should consider the type of missing data, the missing data mechanism, the sampling design and the availability of auxiliary variables correlated with the process of interest. This article explores the use of geostatistical models with multiple imputation to deal with missing data in environmental surveys. The method is applied to the analysis of data generated from a probability survey to estimate Coho salmon abundance in streams located in western Oregon watersheds.


On The Appropriate Transformation Technique And Model Selection In Forecasting Economic Time Series: An Application To Botswana Gdp Data, D. K. Shangodoyin, K. Setlhare, K. K. Moseki, K. Sediakgotla May 2010

Journal of Modern Applied Statistical Methods

Selected data transformation techniques in time series modeling are evaluated using real-life data on Botswana Gross Domestic Product (GDP). The transformation techniques considered yielded reasonable estimates of the original series, with no significant difference at the α = 0.05 level: minimizing the square of the first difference (MFD) and minimizing the square of the second difference (MSD) provided the best transformations for GDP, whereas the Goldstein and Khan method (GKM) had the deficiency of losing data points. The Box-Jenkins procedure was adapted to fit suitable ARIMA(p, d, q) models to both the original and transformed series, with AIC and SIC as …


An Equivalence Test Based On N And P, Markus Neuhäuser May 2010

Journal of Modern Applied Statistical Methods

An equivalence test is proposed which is based on the P-value of a test for a difference and the sample size. This test may be especially appropriate for an exploratory re-analysis when only a non-significant test for a difference was reported, so that neither a confidence interval nor the raw data are available. The test is illustrated using two examples; for both applications the smallest equivalence range for which equivalence could be demonstrated is calculated.
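For reference, the standard benchmark for equivalence testing is the two one-sided tests (TOST) procedure; the sketch below is a large-sample z version of generic TOST for two means, not the author's proposed n-and-P-based test:

```python
import math
import numpy as np

def tost_z(x, y, delta):
    """Two one-sided tests (TOST) for mean equivalence within +/- delta,
    large-sample z version. Equivalence is declared when both one-sided
    tests reject; the overall p-value is the larger of the two."""
    diff = x.mean() - y.mean()
    se = math.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    sf = lambda z: 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z)
    p_lower = sf((diff + delta) / se)    # H0: diff <= -delta
    p_upper = sf(-(diff - delta) / se)   # H0: diff >= +delta
    return max(p_lower, p_upper)

# Illustrative data: two samples with equal true means
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 400)
y = rng.normal(0.0, 1.0, 400)
```

With a generous equivalence margin the TOST p-value is small (equivalence demonstrated); shrinking delta toward zero makes equivalence impossible to show, which is what "smallest demonstrable equivalence range" refers to.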


Jmasm30 Pi-Lca: A Sas Program Computing The Two-Point Mixture Index Of Fit For Two-Class Lca Models With Dichotomous Variables (Sas), Dongquan Zhang, C. Mitchell Dayton May 2010

Journal of Modern Applied Statistical Methods

The two-point mixture index of fit enjoys some desirable features in model fit assessment and model selection; however, a need exists for efficient computational strategies. Applying an NLP algorithm, a program using the SAS matrix language is presented to estimate the two-point index of fit for two-class LCA models with dichotomous response variables. The program offers a tool to compute π* for two-class models, and it also provides an alternative program for conducting latent class analysis with SAS. This study builds a foundation for further research on computational approaches for M-class models.


Assessing Noninferiority In A Three-Arm Trial Using The Bayesian Approach, Pulak Ghosh, Farouk S. Nathoo, Mithat Gonen, Ram C. Tiwari May 2010

Memorial Sloan-Kettering Cancer Center, Dept. of Epidemiology & Biostatistics Working Paper Series

Non-inferiority trials, which aim to demonstrate that a test product is not worse than a competitor by more than a pre-specified small amount, are of great importance to the pharmaceutical community. As a result, methodology for designing and analyzing such trials is required, and developing new methods for such analysis is an important area of statistical research. The three-arm clinical trial is usually recommended for non-inferiority trials by the Food and Drug Administration (FDA). The three-arm trial consists of a placebo, a reference, and an experimental treatment, and simultaneously tests the superiority of the reference over the placebo along with …


Ranked Set Sampling Using Auxiliary Variables Of A Randomized Response Procedure For Estimating The Mean Of A Sensitive Quantitative Character, Carlos N. Bouza May 2010

Journal of Modern Applied Statistical Methods

The behavior of estimators of the mean of a sensitive variable is analyzed when a randomized response procedure is used. The results deal with inference based on a simple random sampling with replacement design. A study of the behavior of the procedures under a ranked set sampling design is developed. A gain in accuracy is generally associated with the proposed alternative model.


Derivation Of Mass Independent Quantum Treatment Of Phenomenon, David Parker May 2010

Journal of Modern Applied Statistical Methods

The derivation and applications are presented of a spatial variable, or spatial radius, which is related to the inertia or mass-energy of any quantum body by a Lorentz-invariant relation. Mass-independent de Broglie and Schrödinger equations are derived and applied to the resolution of the linguistic incompatibility between quantum theory and the geometrical weak equivalence principle. The equivalence principle is restated in terms of the spatial radius. The gravitational attraction between bodies and the relativistic energy are both presented in terms of the spatial radius. The ratio of the gravitational force to the Coulomb force at the Planck scale …


Beyond Alpha: Lower Bounds For The Reliability Of Tests, Nol Bendermacher May 2010

Journal of Modern Applied Statistical Methods

The most common lower bound to the reliability of a test is Cronbach’s alpha. However, several lower bounds exist that are definitely better, that is, higher than alpha. An overview is given as well as an algorithm to find the best: the greatest lower bound.
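Cronbach's alpha, and one of the better-known lower bounds that is never below it, Guttman's lambda-2, can be sketched directly from the item covariance matrix; the simulated parallel-items data are illustrative, and this is not the article's greatest-lower-bound algorithm:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, k_items). Cronbach's alpha, a lower bound on
    reliability: (k/(k-1)) * (1 - sum of item variances / total variance)."""
    c = np.cov(items, rowvar=False)
    k = c.shape[0]
    return k / (k - 1) * (1 - np.trace(c) / c.sum())

def guttman_lambda2(items):
    """Guttman's lambda-2: adds a term based on squared off-diagonal
    covariances, and is always at least as large as alpha."""
    c = np.cov(items, rowvar=False)
    k = c.shape[0]
    off = c - np.diag(np.diag(c))           # off-diagonal covariances
    return (off.sum() + np.sqrt(k / (k - 1) * (off ** 2).sum())) / c.sum()

# Illustrative data: 5 parallel items = common true score + noise
rng = np.random.default_rng(0)
true_score = rng.normal(size=(300, 1))
items = true_score + 0.8 * rng.normal(size=(300, 5))
a = cronbach_alpha(items)
l2 = guttman_lambda2(items)
```

On any positive-covariance item set lambda-2 is at least alpha, illustrating the abstract's point that definitely better (higher) lower bounds exist.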


Assessing Classification Bias In Latent Class Analysis: Comparing Resubstitution And Leave-One-Out Methods, Marc H. Kroopnick, Jinsong Chen, Jaehwa Choi, C. Mitchell Dayton May 2010

Journal of Modern Applied Statistical Methods

This Monte Carlo simulation study assessed the degree of classification success associated with resubstitution methods in latent class analysis (LCA) and compared those results to those of the leave-one-out (L-O-O) method for computing classification success. Specifically, this study considered a latent class model with two classes, dichotomous manifest variables, restricted conditional probabilities for each latent class and relatively small sample sizes. The performance of the resubstitution and L-O-O methods on the lambda classification index was assessed by examining the degree of bias.


A New Biased Estimator Derived From Principal Component Regression Estimator, Set Foong Ng, Heng Chin Low, Soon Hoe Quah May 2010

Journal of Modern Applied Statistical Methods

A new biased estimator obtained by combining the Principal Component Regression Estimator and the special case of Liu-type estimator is proposed. The properties of the new estimator are derived and comparisons between the new estimator and other estimators in terms of mean squared error are presented.


Symmetry Plus Quasi Uniform Association Model And Its Orthogonal Decomposition For Square Contingency Tables, Kouji Yamamoto, Sadao Tomizawa May 2010

Journal of Modern Applied Statistical Methods

A model having the structure of both symmetry and quasi-uniform association (the SQU model) is proposed, and a decomposition of the SQU model is provided. It is also shown with examples that the test statistic for goodness-of-fit of the SQU model is asymptotically equivalent to the sum of those for the decomposed models.


Optimal Meter Placement By Reconciliation Conventional Measurements And Phasor Measurement Units (Pmus), Reza Kaihani, Ali Reza Seifi May 2010

Journal of Modern Applied Statistical Methods

The success of state estimation depends on the number, type and location of the meters and RTUs established on the system. A new method incorporating conventional measurements and the new technology of phasor measurement units (PMUs) is proposed. Conventional meters (power injection and power flow measurements) are allocated in order to reduce the number of meters, RTUs, critical measurements, critical sets and leverage points, and also to improve the numerical stability of the equations; a genetic algorithm is used for the optimization. A second step involves adding PMUs in areas in which it is expected that the accuracy of state estimation will …


Nonparametric Regression With Missing Outcomes Using Weighted Kernel Estimating Equations, Lu Wang, Andrea Rotnitzky, Xihong Lin Apr 2010

Harvard University Biostatistics Working Paper Series

No abstract provided.


Simple Examples Of Estimating Causal Effects Using Targeted Maximum Likelihood Estimation, Michael Rosenblum, Mark J. Van Der Laan Mar 2010

U.C. Berkeley Division of Biostatistics Working Paper Series

We present a brief overview of targeted maximum likelihood for estimating the causal effect of a single time point treatment and of a two time point treatment. We focus on simple examples demonstrating how to apply the methodology developed in (van der Laan and Rubin, 2006; Moore and van der Laan, 2007; van der Laan, 2010a,b). We include R code for the single time point case.


Likelihood Ratio Testing For Admixture Models With Application To Genetic Linkage Analysis, Chong-Zhi Di, Kung-Yee Liang Mar 2010

Johns Hopkins University, Dept. of Biostatistics Working Papers

We consider likelihood ratio tests (LRT) and their modifications for homogeneity in admixture models. The admixture model is a special case of the two-component mixture model, where one component is indexed by an unknown parameter while the parameter value for the other component is known. It has been widely used in genetic linkage analysis under heterogeneity, in which the kernel distribution is binomial. For such models, it has long been recognized that testing for homogeneity is nonstandard and the LRT statistic does not converge to a conventional χ² distribution. In this paper, we investigate the asymptotic behavior of the LRT for …


Graphical Procedures For Evaluating Overall And Subject-Specific Incremental Values From New Predictors With Censored Event Time Data, Hajime Uno, Tianxi Cai, Lu Tian, L. J. Wei Mar 2010

Harvard University Biostatistics Working Paper Series

No abstract provided.


A New Class Of Dantzig Selectors For Censored Linear Regression Models, Yi Li, Lee Dicker, Sihai Dave Zhao Mar 2010

Harvard University Biostatistics Working Paper Series

No abstract provided.


Computing Highly Accurate Or Exact P-Values Using Importance Sampling (Revised), Chris Lloyd Jan 2010

Chris J. Lloyd

Especially for discrete data, standard first order P-values can suffer from poor accuracy, even for quite large sample sizes. Moreover, different test statistics can give practically different results. There are several approaches to computing P-values which do not suffer these defects, such as parametric bootstrap P-values or the partially maximised P-values of Berger & Boos (1994).

Both these methods require computing the exact tail probability of the approximate P-value as a function of the nuisance parameter/s, known as the significance profile. For most practical problems this is not computationally feasible. I develop an importance sampling approach to this problem. A …
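The parametric bootstrap P-value mentioned above can be sketched for comparing two binomial proportions, where the common proportion is the nuisance parameter and is replaced by its pooled MLE before simulating the tail probability. The setup is illustrative, not from the paper, and plain Monte Carlo is used rather than the importance sampling the author develops:

```python
import numpy as np

def bootstrap_pvalue(x1, n1, x2, n2, reps=20000, seed=0):
    """Parametric-bootstrap P-value for H0: p1 = p2 against p1 > p2.
    The common proportion (nuisance parameter) is set to its pooled MLE,
    and the tail probability of the observed statistic is simulated."""
    rng = np.random.default_rng(seed)
    p_hat = (x1 + x2) / (n1 + n2)          # pooled MLE under H0
    t_obs = x1 / n1 - x2 / n2              # observed difference in proportions
    y1 = rng.binomial(n1, p_hat, size=reps)
    y2 = rng.binomial(n2, p_hat, size=reps)
    t_sim = y1 / n1 - y2 / n2              # statistic under the fitted null
    return (t_sim >= t_obs).mean()
```

Evaluating this tail probability across all values of the nuisance parameter, rather than only at the MLE, is what the significance profile and the partially maximised P-values of Berger & Boos require, and why efficient (importance sampling) computation matters.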


Penalized Functional Regression, Jeff Goldsmith, Jennifer Feder, Ciprian M. Crainiceanu, Brian Caffo, Daniel Reich Jan 2010

Johns Hopkins University, Dept. of Biostatistics Working Papers

We develop fast fitting methods for generalized functional linear models. An undersmooth of the functional predictor is obtained by projecting on a large number of smooth eigenvectors and the coefficient function is estimated using penalized spline regression. Our method can be applied to many functional data designs including functions measured with and without error, sparsely or densely sampled. The methods also extend to the case of multiple functional predictors or functional predictors with a natural multilevel structure. Our approach can be implemented using standard mixed effects software and is computationally fast. Our methodology is motivated by a diffusion tensor imaging …


Regression Adjustment And Stratification By Propensity Score In Treatment Effect Estimation, Jessica A. Myers, Thomas A. Louis Jan 2010

Johns Hopkins University, Dept. of Biostatistics Working Papers

Propensity score adjustment of effect estimates in observational studies of treatment is a common technique used to control for bias in treatment assignment. In situations where matching on propensity score is not possible or desirable, regression adjustment and stratification are two options. Regression adjustment is used most often and can be highly efficient, but it can lead to biased results when model assumptions are violated. Validity of the stratification approach depends on fewer model assumptions, but is less efficient than regression adjustment when the regression assumptions hold. To investigate these issues, by simulation we compare stratification and regression adjustments. We …