Open Access. Powered by Scholars. Published by Universities.®

Statistical Models Commons

1,348 Full-Text Articles 2,001 Authors 853,222 Downloads 156 Institutions

All Articles in Statistical Models

1,348 full-text articles. Page 52 of 52.

Participation And Engagement In Sport: A Double Hurdle Approach For The United Kingdom, Babatunde Buraimo, Brad Humphreys, Rob Simmons 2010 University of Central Lancashire

Dr Babatunde Buraimo

This paper uses pooled cross-section data from four waves of the United Kingdom’s Taking Part Survey, 2005 to 2009, to investigate the determinants of the probability of participation and the level of engagement in sport. The two rival modelling approaches considered here are the double-hurdle approach and the Heckman sample selection model. The Heckman model proves to be deficient in several key respects. The double-hurdle approach offers more reliable estimates than the Heckman sample selection model, at least for this particular survey. The distinction is more than statistical nuance, as there are substantive differences in the qualitative results from the two …
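As a hedged illustration of the double-hurdle structure, the sketch below simulates a data-generating process in which a zero outcome can arise either from non-participation (the first hurdle) or from a non-positive latent level of engagement (the second hurdle). The covariate, coefficients, and function name are invented for the example and are not taken from the Taking Part Survey or the paper.

```python
import random

def simulate_double_hurdle(n, g0, g1, b0, b1, sigma, seed=0):
    """Simulate a toy double-hurdle DGP (illustrative parameters only).

    Hurdle 1: a probit-style participation decision.
    Hurdle 2: a latent engagement level that must be positive.
    Zeros can come from failing either hurdle."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = rng.uniform(0, 1)
        participates = rng.gauss(g0 + g1 * x, 1) > 0       # first hurdle
        amount = b0 + b1 * x + rng.gauss(0, sigma)         # latent level
        y = amount if participates and amount > 0 else 0.0  # second hurdle
        out.append((x, y))
    return out

data = simulate_double_hurdle(2000, 0.0, 1.0, 1.0, 2.0, 1.0)
```

Unlike a Tobit-style model, the zeros here are a mixture of two mechanisms, which is exactly what the double-hurdle specification is designed to separate.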


Author Guidelines For Reporting Scale Development And Validation Results In The Journal Of The Society For Social Work And Research, Peter Cabrera-Nguyen 2010 Washington University in St. Louis

Elián P. Cabrera-Nguyen

In this invited article, Cabrera-Nguyen provides guidelines for reporting scale development and validation results. Authors' attention to these guidelines will help ensure the research reported in JSSWR is rigorous and of high quality. This article provides guidance for those using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). In addition, the article provides helpful links to resources addressing structural equation modeling, multiple imputation for missing data, and a general resource for quantitative data analysis.


Creation Of Synthetic Discrete Response Regression Models, Joseph Hilbe 2010 Arizona State University

Joseph M Hilbe

The development and use of synthetic regression models has proven to assist statisticians in better understanding bias in data, as well as how to best interpret various statistics associated with a modeling situation. In this article I present code that can be easily amended for the creation of synthetic binomial, count, and categorical response models. Parameters may be assigned to any number of predictors (which are shown as continuous, binary, or categorical), negative binomial heterogeneity parameters may be assigned, and the number of levels or cut points and values may be specified for ordered and unordered categorical response models. I …
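The article's own code is not reproduced here; as a hedged Python analogue of the same idea, the sketch below generates a synthetic Poisson count response with known coefficients and a log link, so statistics computed from the sample can be checked against the truth. The function name, predictor distribution, and coefficients are invented for illustration.

```python
import math, random

def synth_poisson(n, beta0, beta1, seed=0):
    """Generate a synthetic Poisson count response from one continuous
    predictor with known coefficients (hypothetical helper, log link)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(0, 1)
        mu = math.exp(beta0 + beta1 * x)        # known linear predictor
        # draw a Poisson variate by inversion (adequate for small mu)
        u, p, k = rng.random(), math.exp(-mu), 0
        c = p
        while u > c:
            k += 1
            p *= mu / k
            c += p
        data.append((x, k))
    return data

data = synth_poisson(5000, beta0=0.5, beta1=1.0)
# sanity check: sample mean should sit near E[exp(0.5 + X)] for X ~ U(0,1),
# i.e. exp(0.5) * (e - 1), roughly 2.83
ybar = sum(y for _, y in data) / len(data)
```

Fitting a Poisson regression to `data` and recovering coefficients near (0.5, 1.0) is the kind of check the synthetic-model approach makes possible.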


Statistical Criteria For Selecting The Optimal Number Of Untreated Subjects Matched To Each Treated Subject When Using Many-To-One Matching On The Propensity Score, Peter C. Austin 2010 Institute for Clinical Evaluative Sciences

Peter Austin

Propensity-score matching is increasingly being used to estimate the effects of treatments using observational data. In many-to-one (M:1) matching on the propensity score, M untreated subjects are matched to each treated subject using the propensity score. The authors used Monte Carlo simulations to examine the effect of the choice of M on the statistical performance of matched estimators. They considered matching 1–5 untreated subjects to each treated subject using both nearest-neighbor matching and caliper matching in 96 different scenarios. Increasing the number of untreated subjects matched to each treated subject tended to increase the bias in the estimated treatment effect; …
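The M:1 matching described above can be sketched as greedy nearest-neighbor matching without replacement on the propensity score, with an optional caliper. This is a minimal illustration of the mechanics only, not the simulation code used in the study; the function name and example scores are invented.

```python
def match_m_to_1(treated_ps, control_ps, M, caliper=None):
    """Greedy nearest-neighbour M:1 matching without replacement on the
    propensity score (a sketch; caliper matching is approximated by the
    optional `caliper` bound on the score distance)."""
    used = set()
    matches = {}
    for t, ps in enumerate(treated_ps):
        # rank the still-unused controls by distance to this treated subject
        candidates = sorted(
            (i for i in range(len(control_ps)) if i not in used),
            key=lambda i: abs(control_ps[i] - ps))
        chosen = []
        for i in candidates:
            if len(chosen) == M:
                break
            if caliper is None or abs(control_ps[i] - ps) <= caliper:
                chosen.append(i)
                used.add(i)
        matches[t] = chosen
    return matches

# two treated subjects, five controls; M = 2 controls per treated subject
m = match_m_to_1([0.30, 0.70], [0.28, 0.31, 0.69, 0.72, 0.95], M=2)
```

As M grows, later matches are forced to use controls farther from the treated subject's score, which is the mechanism behind the increasing bias the simulations report.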


The Performance Of Different Propensity-Score Methods For Estimating Differences In Proportions (Risk Differences Or Absolute Risk Reductions) In Observational Studies, Peter C. Austin 2010 Institute for Clinical Evaluative Sciences

Peter Austin

Propensity score methods are increasingly being used to estimate the effects of treatments on health outcomes using observational data. There are four methods for using the propensity score to estimate treatment effects: covariate adjustment using the propensity score, stratification on the propensity score, propensity-score matching, and inverse probability of treatment weighting (IPTW) using the propensity score. When outcomes are binary, the effect of treatment on the outcome can be described using odds ratios, relative risks, risk differences, or the number needed to treat. Several clinical commentators have suggested that risk differences and numbers needed to treat are more meaningful for clinical …
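As a rough illustration of one of the four methods, IPTW, the following hedged sketch computes a weighted risk difference from records of the form (treated, outcome, propensity score). The record layout and function name are invented for the example and are not from the paper.

```python
def iptw_risk_difference(records):
    """Estimate the risk difference by inverse probability of treatment
    weighting: treated subjects get weight 1/ps, untreated 1/(1-ps).
    Each record is (treated, binary_outcome, propensity_score)."""
    wt_treated = [(1 / ps, y) for t, y, ps in records if t == 1]
    wt_control = [(1 / (1 - ps), y) for t, y, ps in records if t == 0]
    risk_t = sum(w * y for w, y in wt_treated) / sum(w for w, _ in wt_treated)
    risk_c = sum(w * y for w, y in wt_control) / sum(w for w, _ in wt_control)
    return risk_t - risk_c
```

The same weighted proportions also yield the number needed to treat as the reciprocal of the absolute risk difference.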


Economic Risk Assessment Using The Fractal Market Hypothesis, Jonathan Blackledge, Marek Rebow 2010 Technological University Dublin

Conference papers

This paper considers the Fractal Market Hypothesis (FMH) for assessing the risk(s) in developing a financial portfolio based on data that are available through the Internet from an increasing number of sources. Most financial risk management systems are still based on the Efficient Market Hypothesis, which often fails due to the inaccuracies of the statistical models that underpin the hypothesis, in particular the assumption that financial data are based on stationary Gaussian processes. The FMH considered in this paper assumes that financial data are non-stationary and statistically self-affine, so that a risk analysis can, in principle, be applied at any time scale …
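Statistical self-affinity of the kind the FMH assumes is often summarized by the Hurst exponent, with H near 0.5 indicating the uncorrelated random walk of the EMH and H away from 0.5 indicating persistent or anti-persistent scaling. The sketch below is a crude rescaled-range (R/S) estimator of H over increments, a generic technique and not necessarily the one used in the paper.

```python
import math, random

def hurst_rs(series, min_chunk=8):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent of a
    series of increments. Sketch only: no small-sample bias correction."""
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            dev, cum, lo, hi = 0.0, 0.0, 0.0, 0.0
            for v in chunk:
                cum += v - mean               # cumulative deviation
                lo, hi = min(lo, cum), max(hi, cum)
                dev += (v - mean) ** 2
            s = math.sqrt(dev / size)
            if s > 0:
                rs_chunk.append((hi - lo) / s)  # range over std dev
        sizes.append(size)
        rs_vals.append(sum(rs_chunk) / len(rs_chunk))
        size *= 2
    # H is the slope of log(R/S) against log(chunk size)
    xs = [math.log(s) for s in sizes]
    ys = [math.log(r) for r in rs_vals]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
```

Applied to Gaussian white-noise increments the estimate should land near 0.5 (with a known upward bias at small sample sizes), while self-affine market data would drift away from it.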


Encryption Using Deterministic Chaos, Jonathan Blackledge, Nikolai Ptitsyn 2010 Technological University Dublin

Articles

The concepts of randomness, unpredictability, complexity and entropy form the basis of modern cryptography, and a cryptosystem can be interpreted as the design of a key-dependent bijective transformation that is unpredictable to an observer for a given computational resource. For any cryptosystem, including, for example, a Pseudo-Random Number Generator (PRNG), an encryption algorithm or a key exchange scheme, a cryptanalyst has access to the time series of a dynamic system and knows the PRNG function (the algorithm that is assumed to be based on some iterative process), which is taken to be in the public domain by virtue of the Kerckhoff-Shannon …
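To make the idea of a deterministic-chaos PRNG concrete, here is a toy bit generator built on the logistic map x → rx(1-x) in its chaotic regime. This is an illustration of the iterative-process idea only, not the authors' scheme, and a floating-point logistic map like this is emphatically not cryptographically secure.

```python
def logistic_prng(seed, n, r=3.99, burn_in=100):
    """Toy bit generator from the logistic map x -> r*x*(1-x) in its
    chaotic regime (r close to 4). Illustrative only; NOT secure."""
    x = seed                               # seed plays the role of the key
    for _ in range(burn_in):               # discard the initial transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(1 if x > 0.5 else 0)    # threshold the orbit to a bit
    return out

bits = logistic_prng(0.123456789, 1000)
```

The generator is fully deterministic given the seed, which is exactly the property a cryptanalyst exploits when the map itself is public.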


A New Perspective On Visual Word Processing Efficiency, Joseph W. Houpt, James T. Townsend 2010 Wright State University - Main Campus

Psychology Faculty Publications

As a fundamental part of our daily lives, visual word processing has received much attention in the psychological literature. Despite the well-established perceptual advantages of word and pseudoword context measured with accuracy, a comparable effect using response times has been elusive. Some researchers continue to question whether the advantage due to word context is perceptual. We use the capacity coefficient, a well-established response-time-based measure of efficiency, to provide evidence of word processing as a particularly efficient perceptual process, complementing the results from the accuracy domain.


Probability Models For Blackjack Poker, Charlie H. Cooke 2010 Old Dominion University

Mathematics & Statistics Faculty Publications

For simplicity in calculation, previous analyses of blackjack poker have employed models based on sampling with replacement. In order to assess what degree of error this may induce, the purpose here is to calculate results for a typical hand where sampling without replacement is employed. It is seen that significant error can result when long runs are required to complete the hand. The hand examined is itself of particular interest, as regards both its outstanding expectations of high yield and certain implications for pair splitting of two nines against the dealer's seven. Theoretical and experimental methods are used in order …
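The with/without-replacement distinction can be seen in a single-card calculation: under sampling with replacement ("infinite deck"), the chance of a ten-valued card is always 16/52, but without replacement it shifts with every card removed. The helper below is a hedged sketch, not the paper's calculation; the removal counts are an invented example.

```python
from fractions import Fraction

def p_ten_without_replacement(removed_tens, removed_others):
    """Probability the next card is ten-valued (10, J, Q, K: 16 of 52)
    from a single deck after some cards are removed, i.e. sampling
    WITHOUT replacement."""
    tens = 16 - removed_tens
    remaining = 52 - removed_tens - removed_others
    return Fraction(tens, remaining)

p_with = Fraction(16, 52)                    # with-replacement model
p_without = p_ten_without_replacement(2, 3)  # e.g. 2 tens and 3 others seen
```

The gap between the two probabilities widens as more cards leave the deck, which is why the error of the with-replacement model grows on hands requiring long runs of cards.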


The Joint Distribution Of Bivariate Exponential Under Linearly Related Model, Norou Diawara, Kumer Pial Das 2010 Old Dominion University

Mathematics & Statistics Faculty Publications

In this paper, fundamental results on the joint distribution of bivariate exponential distributions are established. Positive-support multivariate distribution theory is important in reliability and survival analysis, and we apply it to the case where more than one failure or survival is observed in a given study. Usually, the multivariate distribution is restricted to those with marginal distributions from a specified and familiar lifetime family. The family of exponential distributions contains absolutely continuous and discrete case models with a nonzero probability on a set of measure zero. Examples are given, and estimators are developed and applied to …


Linear Dependency For The Difference In Exponential Regression, Indika Sathish, Norou Diawara 2010 Old Dominion University

Mathematics & Statistics Faculty Publications

In the field of reliability, much has been written on the analysis of related phenomena. Estimation of the difference of two population means has mostly been formulated under the no-correlation assumption. However, in many situations there is a correlation involved. This paper addresses this issue. A sequential estimation method for linearly related lifetime distributions is presented. Estimators for the scale parameters of the exponential distribution are given under squared error loss using a sequential prediction method. Optimal stopping rules are discussed using mean criteria, and numerical results are presented.


Fast Function-On-Scalar Regression With Penalized Basis Expansions, Philip T. Reiss, Lei Huang, Maarten Mennes 2009 New York University

Lei Huang

Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals …
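The P-OLS estimator described above amounts to solving the generalized ridge system (B'B + λD'D)c = B'y, where B holds the basis evaluations, D a difference penalty, and λ the smoothing parameter. The following pure-Python sketch solves that system for a tiny example; it is a hypothetical illustration of the normal equations only, not the authors' implementation, which also covers P-GLS and automatic smoothing-parameter selection.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def penalized_ols(B, y, lam, D):
    """P-OLS as generalized ridge regression: minimise
    ||y - Bc||^2 + lam * ||Dc||^2 by solving (B'B + lam D'D) c = B'y."""
    n, k = len(B), len(B[0])
    BtB = [[sum(B[i][a] * B[i][j] for i in range(n)) for j in range(k)]
           for a in range(k)]
    DtD = [[sum(D[i][a] * D[i][j] for i in range(len(D))) for j in range(k)]
           for a in range(k)]
    Bty = [sum(B[i][a] * y[i] for i in range(n)) for a in range(k)]
    A = [[BtB[a][j] + lam * DtD[a][j] for j in range(k)] for a in range(k)]
    return solve(A, Bty)
```

With λ = 0 this reduces to ordinary least squares; as λ grows, the difference penalty shrinks adjacent coefficients toward each other, which is the roughness-penalty behaviour the text describes.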


The 1905 Einstein Equation In A General Mathematical Analysis Model Of Quasars, Byron E. Bell 2009 DePaul University and Columbia College Chicago

Byron E. Bell

No abstract provided.


Bayesian Inference For A Periodic Stochastic Volatility Model Of Intraday Electricity Prices, Michael S. Smith 2009 Melbourne Business School

Michael Stanley Smith

The Gaussian stochastic volatility model is extended to allow for periodic autoregressions (PAR) in both the level and log-volatility process. Each PAR is represented as a first order vector autoregression for a longitudinal vector of length equal to the period. The periodic stochastic volatility model is therefore expressed as a multivariate stochastic volatility model. Bayesian posterior inference is computed using a Markov chain Monte Carlo scheme for the multivariate representation. A circular prior that exploits the periodicity is suggested for the log-variance of the log-volatilities. The approach is applied to estimate a periodic stochastic volatility model for half-hourly electricity prices …
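As a hedged sketch of the kind of process being modelled, the code below simulates a simplified periodic stochastic volatility series in which only the mean of the log-volatility AR(1) varies with the intraday slot; the paper's model is richer, allowing full periodic autoregressions in both the level and the log-volatility, and its Bayesian MCMC estimation is not attempted here. All parameters are invented.

```python
import math, random

def simulate_periodic_sv(n_days, period, phi, sigma_eta, seed=0):
    """Simulate a toy periodic SV series: log-volatility follows an AR(1)
    whose mean cycles with the intraday period (illustrative only)."""
    rng = random.Random(seed)
    # a smooth periodic pattern in mean log-volatility across the day
    mean_by_slot = [math.sin(2 * math.pi * s / period) for s in range(period)]
    h, returns = 0.0, []
    for t in range(n_days * period):
        m = mean_by_slot[t % period]
        h = m + phi * (h - m) + rng.gauss(0, sigma_eta)   # periodic-mean AR(1)
        returns.append(math.exp(h / 2) * rng.gauss(0, 1))  # SV observation
    return returns

prices = simulate_periodic_sv(10, 48, phi=0.9, sigma_eta=0.3)
```

Stacking each day's `period` observations into one vector turns this into the first-order vector autoregression representation the abstract mentions.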


Bayesian Skew Selection For Multivariate Models, Michael S. Smith, Anastasios Panagiotelis 2009 Melbourne Business School

Michael Stanley Smith

We develop a Bayesian approach for the selection of skew in multivariate skew t distributions constructed through hidden conditioning in the manners suggested by either Azzalini and Capitanio (2003) or Sahu, Dey and Branco (2003). We show that the skew coefficients for each margin are the same for the standardized versions of both distributions. We introduce binary indicators to denote whether there is symmetry, or skew, in each dimension. We adopt a proper beta prior on each non-zero skew coefficient, and derive the corresponding prior on the skew parameters. In both distributions we show that as the degrees of freedom increases, …



Digital Commons powered by bepress