Open Access. Powered by Scholars. Published by Universities.®
Social and Behavioral Sciences Commons™
Articles 1 - 9 of 9
Full-Text Articles in Social and Behavioral Sciences
JMASM 52: Extremely Efficient Permutation And Bootstrap Hypothesis Tests Using R, Christina Chatzipantsiou, Marios Dimitriadis, Manos Papadakis, Michail Tsagris
Journal of Modern Applied Statistical Methods
Resampling-based statistical tests are known to be computationally heavy but reliable when only small sample sizes are available. Despite their attractive theoretical properties, little effort has been put into making them efficient. Computationally efficient methods for calculating permutation-based p-values for the Pearson correlation coefficient and the two independent samples t-test are proposed. The methods are general and can be applied to other similar two-sample mean or two mean vector cases.
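For intuition, the baseline that such papers accelerate can be sketched as a naive Monte Carlo permutation p-value for the two-sample mean difference. This is an illustrative Python sketch, not the authors' optimized R implementation; the function name and permutation count are arbitrary choices:

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=9999, seed=0):
    """Two-sided Monte Carlo permutation p-value for a difference in means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of the pooled observations
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        if diff >= observed:
            count += 1
    # +1 in numerator and denominator so the estimated p-value is never zero
    return (count + 1) / (n_perm + 1)
```

Efficiency work in this vein targets exactly this loop, for example by computing all permuted test statistics in one vectorized pass instead of one at a time.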
Some Remarks On Rao And Lovric's 'Testing Point Null Hypothesis Of A Normal Mean And The Truth: 21st Century Perspective', Bruno D. Zumbo, Edward Kroc
Journal of Modern Applied Statistical Methods
Although we have much to agree with in Rao and Lovric's important discussion of the test of point null hypotheses, it stirred us to provide a way out of their apparent zero-probability paradox and to recast the Hodges-Lehmann paradigm from a Serlin-Lapsley approach. We close our remarks with an eye toward a broad perspective.
A Monte Carlo Simulation Of The Robust Rank-Order Test Under Various Population Symmetry Conditions, William T. Mickelson
Journal of Modern Applied Statistical Methods
The Type I error rate of the robust rank-order test under various population symmetry conditions is explored through Monte Carlo simulation. Findings indicate the test has difficulty controlling Type I error under generalized Behrens-Fisher conditions for moderately sized samples.
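The Monte Carlo scheme behind such a study can be sketched in Python. As an assumption for this sketch, the ordinary Wilcoxon rank-sum test (normal approximation) stands in for the robust rank-order statistic, and all sample sizes and spreads are illustrative; a Behrens-Fisher condition means the means are equal (so the null is true) but the spreads differ:

```python
import numpy as np
from math import erf, sqrt

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (a stand-in here for the robust rank-order statistic)."""
    n1, n2 = len(x), len(y)
    # double argsort yields the ranks 1..n1+n2 (no ties for continuous data)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1.0
    w = ranks[:n1].sum()                          # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0                 # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # sd of W under H0
    z = abs(w - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

def type1_error_rate(n1=15, n2=15, sd2=3.0, n_sim=2000, alpha=0.05, seed=1):
    """Monte Carlo estimate of the Type I error rate under a Behrens-Fisher
    condition: equal means, unequal standard deviations (1 vs sd2)."""
    rng = np.random.default_rng(seed)
    rejections = sum(
        ranksum_pvalue(rng.normal(0.0, 1.0, n1), rng.normal(0.0, sd2, n2)) < alpha
        for _ in range(n_sim)
    )
    return rejections / n_sim
```

Under equal variances the estimated rate should sit near the nominal alpha; a study like the one described varies the spread ratio and the population shapes and checks how far the rate drifts from nominal.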
The Not-So-Quiet Revolution: Cautionary Comments On The Rejection Of Hypothesis Testing In Favor Of A “Causal” Modeling Alternative, Daniel H. Robinson, Joel R. Levin
Journal of Modern Applied Statistical Methods
Rodgers (2010) recently applauded a revolution involving the increased use of statistical modeling techniques. It is argued that such use may have a downside, citing empirical evidence in educational psychology that modeling techniques are often applied in cross-sectional, correlational studies to produce unjustified causal conclusions and prescriptive statements.
A New Approximate Bayesian Approach For Decision Making About The Variance Of A Gaussian Distribution Versus The Classical Approach, Vincent A. R. Camara
Journal of Modern Applied Statistical Methods
Rules for decision-making about the variance of a Gaussian distribution are obtained and compared. Considering the squared-error loss function, an approximate Bayesian decision rule for the variance of a normal population is derived. Using normal data and SAS software, the approximate Bayesian test results were compared to their counterparts obtained with the well-known classical decision rule. The proposed approximate Bayesian decision rule relies only on the observations. The classical decision rule, which uses the chi-square statistic, does not always yield the best results: the proposed approach often performs better.
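The classical rule referred to here is the standard chi-square test of a normal variance. A minimal sketch follows; the critical values are taken from a standard chi-square table and are specific to the illustrative n = 20, alpha = 0.05 case noted in the comment:

```python
import numpy as np

def classical_variance_test(x, sigma0_sq, lower_crit, upper_crit):
    """Classical two-sided test of H0: sigma^2 = sigma0^2 for normal data.

    Under H0 the statistic (n - 1) * s^2 / sigma0^2 is chi-square with
    n - 1 degrees of freedom; reject H0 when it falls outside the two
    table critical values."""
    n = len(x)
    s_sq = np.var(x, ddof=1)                 # unbiased sample variance
    stat = (n - 1) * s_sq / sigma0_sq
    return stat, not (lower_crit <= stat <= upper_crit)

# Example: n = 20 and alpha = 0.05 give df = 19, with standard-table
# critical values chi2_{0.025,19} ~ 8.907 and chi2_{0.975,19} ~ 32.852.
```

The point of comparison in the abstract is that the Bayesian rule is built from the observations alone, whereas this classical rule additionally requires the chi-square reference distribution and its tabulated quantiles.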
Statistical Tests, Tests Of Significance, And Tests Of A Hypothesis Using Excel, David A. Heiser
Journal of Modern Applied Statistical Methods
Microsoft’s spreadsheet program Excel has many statistical functions and routines. Over the years there have been criticisms of the inaccuracies of these functions and routines (see McCullough 1998, 1999). This article reviews some of the statistical methods used to test for differences between two samples. In practice, the analysis is done by a software program, often with the actual method used unknown: the user has to select the method and its variations without full knowledge of just what calculations are performed, and usually there is no convenient trace back to textbook explanations. This article describes the Excel algorithm …
Deconstructing Arguments From The Case Against Hypothesis Testing, Shlomo S. Sawilowsky
Journal of Modern Applied Statistical Methods
The main purpose of this article is to contest the propositions that (1) hypothesis tests should be abandoned in favor of confidence intervals, and (2) science has not benefited from hypothesis testing. The minor purpose is to propose that (1) descriptive statistics, graphics, and effect sizes do not obviate the need for hypothesis testing, (2) significance testing (reporting p values and leaving it to the reader to determine significance) is subjective and outside the realm of the scientific method, and (3) Bayesian and qualitative methods should be used for Bayesian and qualitative research studies, respectively.
The Trouble With Interpreting Statistically Nonsignificant Effect Sizes In Single-Study Investigations, Joel R. Levin, Daniel H. Robinson
Journal of Modern Applied Statistical Methods
In this commentary, we offer a perspective on the problem of authors reporting and interpreting effect sizes in the absence of formal statistical tests of their chanceness. The perspective reinforces our previous distinction between single-study investigations and multiple-study syntheses.
Two-Sided Equivalence Testing Of The Difference Between Two Means, R. Clifford Blair, Stephen R. Cole
Journal of Modern Applied Statistical Methods
Studies designed to examine the equivalence of treatments are increasingly common in social and biomedical research. Herein, we outline the rationale and some nuances underlying equivalence testing of the difference between two means. Specifically, we note the odd relation between tests of hypothesis and confidence intervals in the equivalence setting.
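One common concrete form of equivalence testing is the two one-sided tests (TOST) procedure, which amounts to checking whether a (1 - 2*alpha) confidence interval for the difference lies inside the equivalence bounds; the mismatch between the 90% interval and the 5% test level is exactly the kind of odd test/interval relation the abstract alludes to. A minimal sketch, assuming a normal critical value in place of the t quantile (an approximation adequate only for larger samples):

```python
import numpy as np

def tost_equivalence(x, y, delta):
    """Declare the two means equivalent at (approximately) the 5% level when
    the 90% confidence interval for the mean difference lies entirely inside
    (-delta, +delta). Uses the standard-normal 0.95 quantile in place of the
    t quantile, an approximation assumed for this sketch."""
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    z = 1.6449                              # standard-normal 95th percentile
    lo, hi = diff - z * se, diff + z * se   # 90% CI for the difference
    return bool(-delta < lo and hi < delta)
```

The equivalence margin delta is a substantive choice, not a statistical one: it encodes how large a difference between the treatments would still count as practically negligible.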