Open Access. Powered by Scholars. Published by Universities.®

Applied Statistics Commons

3,524 Full-Text Articles 4,908 Authors 2,834,925 Downloads 168 Institutions

All Articles in Applied Statistics

3,524 full-text articles. Page 103 of 108.

Reducing Selection Bias In Analyzing Longitudinal Health Data With High Mortality Rates, Xian Liu, Charles C. Engel, Han Kang, Kristie L. Gore 2010 Uniformed Services University of the Health Sciences, Bethesda MD and Walter Reed National Military Medical Center, Bethesda MD

Journal of Modern Applied Statistical Methods

Two longitudinal regression models, one parametric and one nonparametric, are developed to reduce selection bias when analyzing longitudinal health data with high mortality rates. The parametric mixed model is a two-step linear regression approach, whereas the nonparametric mixed-effects regression model uses a retransformation method to handle random errors across time.


Men In Black: The Impact Of New Contracts On Football Referees’ Performances, Babatunde Buraimo, Alex Bryson, Rob Simmons 2010 University of Central Lancashire

Dr Babatunde Buraimo

No abstract provided.


Application Of The Fractional Diffusion Equation For Predicting Market Behaviour, Jonathan Blackledge 2010 Technological University Dublin

Articles

Most financial modelling systems rely on an underlying hypothesis known as the Efficient Market Hypothesis (EMH), including the famous Black-Scholes formula for pricing an option. However, the EMH has a fundamental flaw: it is based on the assumption that economic processes are normally distributed, and it has long been known that this is not the case. This fundamental assumption leads to a number of shortcomings associated with using the EMH to analyse financial data, including a failure to predict the future volatility of a market share value. This paper introduces a new financial risk assessment model based on Levy statistics …
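
As a rough illustration of the non-Gaussian point made above (not the paper's fractional-diffusion model), the sketch below compares tail exceedances of simulated Gaussian returns with returns drawn from a heavy-tailed Lévy-stable law; the stability index alpha = 1.7 is an arbitrary choice for the example.

```python
# Hedged illustration only: heavy-tailed Levy-stable "returns" produce far more
# extreme moves than the Gaussian returns assumed under the EMH / Black-Scholes.
import numpy as np
from scipy.stats import levy_stable, norm

n = 100_000
gauss = norm.rvs(size=n, random_state=0)
heavy = levy_stable.rvs(alpha=1.7, beta=0.0, size=n, random_state=1)  # alpha < 2: infinite variance

for name, r in [("gaussian", gauss), ("levy-stable", heavy)]:
    print(f"{name:12s} P(|x| > 4) = {np.mean(np.abs(r) > 4):.5f}")
```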


Statistical Image Recovery From Laser Speckle Patterns With Polarization Diversity, Donald B. Dixon 2010 Air Force Institute of Technology

Theses and Dissertations

This research extends the theory and understanding of the laser speckle imaging technique. This non-traditional imaging technique may be employed to improve space situational awareness and image deep space objects from a ground-based sensor system. The use of this technique is motivated by the ability to overcome aperture size limitations and the distortion effects from Earth’s atmosphere. Laser speckle imaging is a lensless, coherent method for forming two-dimensional images from their autocorrelation functions. Phase retrieval from autocorrelation data is an ill-posed problem where multiple solutions exist. This research introduces polarization diversity as a method for obtaining additional information so the …
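
Since the abstract is truncated, the sketch below shows only a generic error-reduction (Fienup-type) phase-retrieval loop of the kind the problem statement implies: recovering an image from its Fourier magnitude (equivalently, its autocorrelation) under a support constraint. It is not the polarization-diversity method developed in the thesis.

```python
# Generic error-reduction phase retrieval: alternate between imposing the measured
# Fourier magnitude and imposing support/non-negativity in image space.
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[24:40, 28:36] = rng.random((16, 8))        # small object on a dark background
magnitude = np.abs(np.fft.fft2(truth))           # "measured" |F|; the phase is unknown

support = np.zeros_like(truth, dtype=bool)
support[16:48, 16:48] = True                     # loose support constraint
g = rng.random(truth.shape) * support            # random starting estimate
for _ in range(500):
    G = np.fft.fft2(g)
    G = magnitude * np.exp(1j * np.angle(G))     # impose the measured magnitude
    g = np.real(np.fft.ifft2(G))
    g = np.where(support & (g > 0), g, 0.0)      # impose support and non-negativity

misfit = np.linalg.norm(np.abs(np.fft.fft2(g)) - magnitude) / np.linalg.norm(magnitude)
print(f"relative Fourier-magnitude misfit after 500 iterations: {misfit:.3f}")
```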


Early Stopping Of A Neural Network Via The Receiver Operating Curve., Daoping Yu 2010 East Tennessee State University

Electronic Theses and Dissertations

This thesis presents the area under the ROC (Receiver Operating Characteristic) curve, abbreviated AUC, as an alternative measure for evaluating the predictive performance of ANN (Artificial Neural Network) classifiers. Conventionally, neural networks are trained until the total error converges to zero, which may give rise to over-fitting problems. To ensure that they do not overfit the training data and then fail to generalize well on new data, it appears effective to stop training as early as possible once the AUC is sufficiently large, by integrating ROC/AUC analysis into the training process. In order to reduce learning costs involving the …
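
A minimal sketch of the general idea, assuming a scikit-learn MLP as the network and a simple patience rule on validation AUC; the thesis' actual stopping criterion may differ.

```python
# Monitor validation AUC each epoch and stop once it stops improving,
# rather than training the total error toward zero.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), learning_rate_init=0.01, random_state=0)
best_auc, patience, stall = 0.0, 5, 0
for epoch in range(200):
    net.partial_fit(X_tr, y_tr, classes=np.unique(y))        # one training pass
    auc = roc_auc_score(y_val, net.predict_proba(X_val)[:, 1])
    if auc > best_auc + 1e-4:
        best_auc, stall = auc, 0                              # AUC still improving
    else:
        stall += 1
        if stall >= patience:                                 # AUC has plateaued: stop early
            break
print(f"stopped after {epoch + 1} epochs, validation AUC = {best_auc:.3f}")
```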


Mixture Of Factor Analyzers With Information Criteria And The Genetic Algorithm, Esra Turan 2010 University of Tennessee, Knoxville

Doctoral Dissertations

In this dissertation, we have developed and combined several statistical techniques in Bayesian factor analysis (BAYFA) and mixture of factor analyzers (MFA) to overcome the shortcomings of these existing methods. Information criteria are brought into the context of the BAYFA model as a decision rule for choosing the number of factors m, along with the Press and Shigemasu method, Gibbs sampling, and Iterated Conditional Modes deterministic optimization. Because of the sensitivity of BAYFA to prior information on the factor pattern structure, the prior factor pattern structure is learned directly and adaptively from the sample observations using the Sparse Root algorithm. …
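
As a much-simplified stand-in for the machinery described (maximum-likelihood factor analysis with BIC rather than the Bayesian and genetic-algorithm approach of the dissertation), the sketch below picks the number of factors m by minimising an information criterion.

```python
# Fit factor-analysis models with m = 1..6 factors and choose m by BIC.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n, p, true_m = 400, 10, 3
loadings = rng.normal(size=(p, true_m))
X = rng.normal(size=(n, true_m)) @ loadings.T + rng.normal(scale=0.5, size=(n, p))

def bic(m):
    fa = FactorAnalysis(n_components=m, random_state=0).fit(X)
    loglik = fa.score(X) * n                          # total log-likelihood
    k = p * m - m * (m - 1) / 2 + p                   # free parameters: loadings + uniquenesses
    return -2 * loglik + k * np.log(n)

best = min(range(1, 7), key=bic)
print(f"BIC selects m = {best} factors (data generated with {true_m})")
```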


Arima Model For Forecasting Poisson Data: Application To Long-Term Earthquake Predictions, Wangdong Fu 2010 University of Nevada, Las Vegas

UNLV Theses, Dissertations, Professional Papers, and Capstones

Earthquakes that occurred worldwide during the period of 1896 to 2009 with magnitude greater than or equal to 8.0 on the Richter scale are assumed to follow a Poisson process. Autoregressive Integrated Moving Average models are presented to fit the empirical recurrence rates, and to predict future large earthquakes. We demonstrate modeling and computational techniques for point processes and time series data. Specifically, for the proposed methodology, we address the following areas: data management and graphic presentation, model fitting and selection, model validation, model and data sensitivity analysis, and forecasting.
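
A hedged sketch of the general approach, using simulated yearly Poisson counts and an arbitrary ARIMA(1,1,1) order rather than the thesis' earthquake catalogue and selected model: the empirical recurrence rate (cumulative count divided by elapsed years) is fitted and forecast with statsmodels.

```python
# Fit an ARIMA model to an empirical recurrence-rate series and forecast ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n_years = 2009 - 1896 + 1
counts = rng.poisson(lam=0.7, size=n_years)               # simulated yearly counts of M >= 8.0 events
rate = pd.Series(np.cumsum(counts) / np.arange(1, n_years + 1),
                 index=pd.period_range("1896", periods=n_years, freq="Y"))

fit = ARIMA(rate, order=(1, 1, 1)).fit()                   # order chosen only for illustration
print(fit.forecast(steps=5))                               # predicted recurrence rate, next five years
```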


Estimating Confidence Intervals For Eigenvalues In Exploratory Factor Analysis, Ross Larsen, Russell Warne 2010 Brigham Young University - Provo

Russell T Warne

Exploratory factor analysis (EFA) has become a common procedure in educational and psychological research. In the course of performing an EFA, researchers often base the decision of how many factors to retain on the eigenvalues for the factors. However, many researchers do not realize that eigenvalues, like all sample statistics, are subject to sampling error, which means that confidence intervals (CIs) can be estimated for each eigenvalue. In the present article, we demonstrate two methods of estimating CIs for eigenvalues: one based on the mathematical properties of the central limit theorem, and the other based on bootstrapping. References to appropriate …
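
A minimal sketch of the bootstrap variant, assuming percentile intervals for the eigenvalues of a sample correlation matrix; the article's exact procedures may differ.

```python
# Bootstrap percentile confidence intervals for correlation-matrix eigenvalues,
# the quantities typically inspected when deciding how many factors to retain.
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 6
cov = np.eye(p) * 0.4 + 0.6                                # unit variances, common correlation 0.6
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

def eigenvalues(data):
    return np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

boot = np.array([eigenvalues(X[rng.integers(0, n, n)]) for _ in range(2000)])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
for k, (lo, hi) in enumerate(zip(lower, upper), start=1):
    print(f"eigenvalue {k}: 95% CI [{lo:.2f}, {hi:.2f}]")
```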


The Generation Of Domestic Electricity Load Profiles Through Markov Chain Modelling, Aidan Duffy, Fintan McLoughlin, Michael Conlon 2010 Technological University Dublin

Conference Papers

Micro-generation technologies such as photovoltaics and micro-wind power are becoming increasingly popular among homeowners, mainly as a result of policy support mechanisms that help improve their cost competitiveness compared with traditional fossil fuel generation. National government strategies to reduce electricity demand generated from fossil fuels and to meet European Union 20/20 targets are driving this change. However, the real performance of these technologies in a domestic setting is often not known, as high-time-resolution models of domestic electricity load profiles are not readily available. As a result, projections in terms of reducing electricity demand and financial paybacks for these micro-generation …
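
For illustration only, a first-order Markov chain over discretised load states can generate a synthetic high-time-resolution profile; the states and transition matrix below are invented for the example, not the authors' fitted values.

```python
# Sample a synthetic half-hourly domestic demand profile from a Markov chain.
import numpy as np

states = np.array([0.1, 0.5, 1.5, 3.0])           # kW bins: base, low, medium, high
P = np.array([[0.80, 0.15, 0.04, 0.01],           # row i: transition probabilities from state i
              [0.20, 0.60, 0.15, 0.05],
              [0.05, 0.25, 0.55, 0.15],
              [0.02, 0.13, 0.35, 0.50]])

rng = np.random.default_rng(42)
idx = [0]
for _ in range(47):                                # 48 half-hour periods in a day
    idx.append(rng.choice(4, p=P[idx[-1]]))
profile = states[idx]
print(profile)
```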


Statistical Analysis Of Texas Holdem Poker, Daniel Bragonier 2010 California Polytechnic State University, San Luis Obispo

Statistics

Lifetime online poker data for Mike Linn were gathered and analyzed with the aim of maximizing profit. Techniques included univariate analysis, regression analysis, ANOVA, logistic regression, and outlier analysis. Nothing of real substance was found: the large data set gave the tests considerable power, so the results showed plenty of statistical significance but little practical significance. The data did not provide all the answers sought, but there was some value in examining them in a strictly statistical manner.


The 1905 Einstein Equation In A General Mathematical Analysis Model Of Quasars, Byron E. Bell 2010 DePaul University and Columbia College Chicago

Byron E. Bell

The 1905 wave equation of Albert Einstein is a model that can be used in many areas, such as physics, applied mathematics, statistics, quantum chaos and financial mathematics. I give a proof starting from the equation of A. Einstein’s paper “Zur Elektrodynamik bewegter Körper”; it is done by removing the variable time (t) and the constant c (the speed of light) from the above equation and looking at the factors that affect the model in a real analysis framework. The model is tested with the SDSS-DR5 Quasar Catalog (Schneider +, 2007). Keywords: direction cosine, apparent magnitudes of optical light; ultraviolet …


A New Screening Methodology For Mixture Experiments, Maria Weese 2010 University of Tennessee - Knoxville

Doctoral Dissertations

Many materials we use in daily life are mixtures: plastics, gasoline, food, medicine, etc. Mixture experiments, in which the factors are proportions of components and the response depends only on the relative proportions of the components, are an integral part of product development and improvement. However, when the number of components is large and there are complex constraints, experimentation can be a daunting task. We study screening methods in a mixture setting using the framework of the Cox mixture model [1]. We exploit the easy interpretation of the parameters in the Cox mixture model and develop methods for screening …


Fisher Was Right, Ronald C. Serlin 2010 University of Wisconsin - Madison

Journal of Modern Applied Statistical Methods

Invited address presented to the Educational Statistician’s Special Interest Group at the annual meeting of the American Educational Research Association, Denver, May 1, 2010.


Inferences About The Population Mean: Empirical Likelihood Versus Bootstrap-T, Rand R. Wilcox 2010 University of Southern California

Journal of Modern Applied Statistical Methods

The problem of making inferences about the population mean, μ, is considered. Known theoretical results suggest that a Bartlett corrected empirical likelihood method is preferable to two basic bootstrap techniques: a symmetric two-sided bootstrap-t and an equal-tailed bootstrap-t. However, simulations in this study indicate that, when the sample size is small, these two bootstrap methods are generally better in terms of Type I errors and probability coverage. As the sample size increases, situations are found where the Bartlett corrected empirical likelihood method performs better than the equal-tailed bootstrap-t, but the symmetric bootstrap-t gives the best results. None of the four …
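
A brief sketch of one of the methods compared, the symmetric two-sided bootstrap-t interval for the mean, under the usual construction (a percentile of |T*| applied to the observed standard error); it is not the article's simulation code.

```python
# Symmetric two-sided bootstrap-t confidence interval for the population mean.
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(size=25)                          # a small, skewed sample
n, xbar, se = x.size, x.mean(), x.std(ddof=1) / np.sqrt(x.size)

t_star = []
for _ in range(5000):
    xb = rng.choice(x, n, replace=True)
    t_star.append((xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(n)))
c = np.percentile(np.abs(t_star), 95)               # symmetric bootstrap critical value
print(f"95% symmetric bootstrap-t CI: [{xbar - c * se:.3f}, {xbar + c * se:.3f}]")
```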


The Influence Of Data Generation On Simulation Study Results: Tests Of Mean Differences, Tim Moses, Alan Klockars 2010 Educational Testing Service, Princeton, NJ

Journal of Modern Applied Statistical Methods

Type I error and power of the standard independent samples t-test were compared with the trimmed and Winsorized t-test with respect to continuous distributions and various discrete distributions known to occur in applied data. The continuous and discrete distributions were generated with similar levels of skew and kurtosis but the discrete distributions had a variety of structural features not reflected in the continuous distributions. The results showed that the Type I error rates of the t-tests were not seriously affected, but the power rate of the trimmed and Winsorized t-test varied greatly across the considered distributions.
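
For reference, the two procedures can be contrasted on a single skewed data set with SciPy, whose ttest_ind trim argument gives a Yuen-type trimmed test in the spirit of the trimmed and Winsorized test studied; this is only an illustration, not the simulation design of the article.

```python
# Compare the standard independent-samples t-test with a 20% trimmed (Yuen) t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.exponential(scale=1.0, size=40)             # skewed samples with equal population means
b = rng.exponential(scale=1.0, size=40)

t_std = stats.ttest_ind(a, b)                       # classical Student t-test
t_trim = stats.ttest_ind(a, b, trim=0.2)            # Yuen's trimmed-means t-test
print(f"standard t: t = {t_std.statistic:.2f}, p = {t_std.pvalue:.3f}")
print(f"trimmed t:  t = {t_trim.statistic:.2f}, p = {t_trim.pvalue:.3f}")
```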


The Effectiveness Of Stepwise Discriminant Analysis As A Post Hoc Procedure To A Significant Manova, Erik L. Heiny, Daniel J. Mundform 2010 Utah Valley University

Journal of Modern Applied Statistical Methods

The effectiveness of SWDA as a post hoc procedure in a two-way MANOVA was examined using various numbers of dependent variables, sample sizes, effect sizes, correlation structures, and significance levels. The procedure did not work well in general except with small numbers of variables, larger samples and low correlations between variables.


The Small-Sample Efficiency Of Some Recently Proposed Multivariate Measures Of Location, Marie Ng, Rand R. Wilcox 2010 University of Hong Kong

Journal of Modern Applied Statistical Methods

Numerous multivariate robust measures of location have been proposed and many have been found to be unsatisfactory in terms of their small-sample efficiency. Several new measures of location have recently been derived, however, nothing is known about their small-sample efficiency or how they compare to the sample mean under normality. This research compared the efficiency for p = 2, 5, and 8 with sample sizes n = 20 and 50 for p-variate data. Although previous studies indicate that so-called skipped estimators are efficient, this study found that variations of this approach can perform poorly when n is small and p …
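
As a rough illustration of one skipped-estimator variant (not necessarily any of the estimators studied), the sketch below flags outliers by robust Mahalanobis distance from a minimum covariance determinant fit and averages the remaining points.

```python
# A "skipped" multivariate location estimate: drop flagged outliers, then take the mean.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(5)
p, n = 5, 50
X = rng.standard_normal((n, p))
X[:5] += 6                                          # contaminate a few observations

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                             # squared robust distances
keep = d2 <= chi2.ppf(0.975, df=p)                  # discard points beyond the chi-square cutoff
skipped_mean = X[keep].mean(axis=0)
print(np.round(skipped_mean, 2))
```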


Model Based Vs. Model Independent Tests For Cross-Correlation, H.E.T. Holgersson, Peter S. Karlsson 2010 Jönköping International Business School, Sweden

Journal of Modern Applied Statistical Methods

This article discusses the issue of whether cross correlation should be tested by model dependent or model independent methods. Several different tests are proposed and their main properties are investigated analytically and with simulations. It is argued that model independent tests should be used in applied work.


The Performance Of Multiple Imputation For Likert-Type Items With Missing Data, Walter Leite, S. Natasha Beretvas 2010 University of Florida

Journal of Modern Applied Statistical Methods

The performance of multiple imputation (MI) for missing data in Likert-type items assuming multivariate normality was assessed using simulation methods. MI was robust to violations of continuity and normality. With 30% of missing data, MAR conditions resulted in negatively biased correlations. With 50% missingness, all results were negatively biased.
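
A hedged sketch of the general setup, using scikit-learn's IterativeImputer with posterior draws as a stand-in for normal-model multiple imputation and m = 5 imputations; the study's simulation design and software were likely different.

```python
# Impute missing Likert-type responses under a normal model several times,
# round back to the 1-5 scale, and pool an item correlation across imputations.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
latent = rng.multivariate_normal([3, 3, 3],
                                 [[1, .6, .6], [.6, 1, .6], [.6, .6, 1]], size=500)
items = np.clip(np.round(latent), 1, 5)                     # discretise to a 1-5 Likert scale
mask = rng.random(items.shape) < 0.30                       # 30% missing completely at random
observed = np.where(mask, np.nan, items)

corrs = []
for m in range(5):                                          # five imputed data sets
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = np.clip(np.round(imp.fit_transform(observed)), 1, 5)
    corrs.append(np.corrcoef(completed, rowvar=False)[0, 1])
print(f"pooled r(item1, item2) = {np.mean(corrs):.3f} (latent r = 0.6 before discretisation)")
```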


On Exact 100(1-Α)% Confidence Interval Of Autocorrelation Coefficient In Multivariate Data When The Errors Are Autocorrelated, Madhusudan Bhandary 2010 Columbus State University

Journal of Modern Applied Statistical Methods

An exact 100(1−α)% confidence interval for the autocorrelation coefficient ρ is derived based on a single multinormal sample. The confidence interval is the interval between the two roots of a quadratic equation in ρ. A real-life example is also presented.

