Physical Sciences and Mathematics Commons

Articles 1 - 6 of 6

Full-Text Articles in Physical Sciences and Mathematics

On The Authentic Notion, Relevance, And Solution Of The Jeffreys-Lindley Paradox In The Zettabyte Era, Miodrag M. Lovric Apr 2020

Journal of Modern Applied Statistical Methods

The Jeffreys-Lindley paradox is the most widely cited divergence between the frequentist and Bayesian approaches to statistical inference. It is embedded in the very foundations of statistics and divides frequentist and Bayesian inference in an irreconcilable way. This paradox is the Gordian Knot of statistical inference and Data Science in the Zettabyte Era. If statistical science is to be ready for a revolution driven by the challenges of analyzing massive data sets, the first step is to resolve this anomaly. For more than sixty years, the Jeffreys-Lindley paradox has been under active discussion and debate. Many solutions have been proposed, none entirely satisfactory. …
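To make the divergence concrete, here is a minimal numerical sketch of the paradox (an illustration constructed for this listing, not taken from the article): testing H0: theta = 0 against H1 with a N(0, tau^2) prior on theta, for data whose z-statistic is held fixed at 1.96 (two-sided p ~ 0.05). The prior scale tau = 1 and the sample sizes are assumptions of the sketch.

```python
# Jeffreys-Lindley paradox, numerically: hold the z-statistic (and hence
# the p-value) fixed while n grows, and watch the Bayes factor swing to H0.
import numpy as np
from scipy.stats import norm

tau = 1.0   # prior sd of theta under H1 (assumed)
z = 1.96    # fixed z-statistic, two-sided p ~ 0.05

for n in [10, 1_000, 100_000, 10_000_000]:
    se = 1.0 / np.sqrt(n)     # standard error of the mean (unit-variance data)
    xbar = z * se             # sample mean that always yields p ~ 0.05
    m0 = norm.pdf(xbar, 0.0, se)                       # marginal under H0
    m1 = norm.pdf(xbar, 0.0, np.sqrt(tau**2 + se**2))  # marginal under H1
    print(f"n = {n:>10,}   p ~ 0.05   BF01 = {m0 / m1:9.1f}")
```

At n = 10 the Bayes factor mildly favors H1, but by n = 10,000,000 it favors H0 by a factor in the hundreds, even though the p-value never moves.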


Development And Properties Of Kernel-Based Methods For The Interpretation And Presentation Of Forensic Evidence, Douglas Armstrong Jan 2017

Electronic Theses and Dissertations

The inference of the source of forensic evidence is related to model selection. Many forms of evidence can only be represented by complex, high-dimensional random vectors and cannot be assigned a likelihood structure. A common approach to circumvent this is to measure the similarity between pairs of objects composing the evidence. Such methods are ad hoc and unstable approaches to the judicial inference process. While these methods address the dimensionality issue, they also engender dependencies between scores: two scores that have an object in common are correlated in ways these models do not take into account. The model developed in this research captures …
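As a hypothetical illustration of the shared-object dependence the abstract describes (this is not the dissertation's model), the sketch below draws random "evidence" vectors, scores pairs by negative Euclidean distance, and checks by simulation that two scores sharing an object are correlated while scores on disjoint pairs are not. The similarity function and all numbers are invented for the example.

```python
# Shared-object dependence between similarity scores, checked by simulation.
import numpy as np

rng = np.random.default_rng(0)
reps, dim = 5_000, 50

def score(a, b):
    # A stand-in similarity score: negative Euclidean distance.
    return -np.linalg.norm(a - b, axis=-1)

# Four "objects" per replicate. Scores (0,1) and (0,2) share object 0;
# score (2,3) shares no object with score (0,1).
x = rng.normal(size=(reps, 4, dim))
s01 = score(x[:, 0], x[:, 1])
s02 = score(x[:, 0], x[:, 2])
s23 = score(x[:, 2], x[:, 3])

print("corr(score(0,1), score(0,2)) =", round(np.corrcoef(s01, s02)[0, 1], 3))  # clearly nonzero
print("corr(score(0,1), score(2,3)) =", round(np.corrcoef(s01, s23)[0, 1], 3))  # near zero
```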


The Bayes Factor For Case-Control Studies With Misclassified Data, Tzesan Lee Nov 2015

Journal of Modern Applied Statistical Methods

The question of how to test whether data collected for a case-control study are misclassified was investigated. A mixed approach was employed to calculate the Bayes factor for assessing the validity of the null hypothesis of no misclassification. A real-world data set on the association between lung cancer and smoking status was used to illustrate the proposed method.
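A hedged sketch of how such a Bayes factor can be computed (a generic Monte Carlo construction, not the paper's mixed approach): under H0 the observed exposure count in one study arm is binomial in the true exposure probability; under H1 the observed rate is p*Se + (1 - p)*(1 - Sp), with assumed Beta priors on sensitivity Se and specificity Sp. The counts and priors below are invented.

```python
# Bayes factor for H0 "no misclassification" in one study arm,
# by Monte Carlo integration of the marginal likelihoods.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
n, k = 200, 120          # hypothetical arm size and observed "exposed" count
draws = 200_000

# H0: k ~ Binomial(n, p), p ~ Beta(1, 1); the marginal is exactly 1/(n + 1).
m0 = 1.0 / (n + 1)

# H1: the *observed* exposure rate is q = p*Se + (1 - p)*(1 - Sp),
# with Se, Sp ~ Beta(20, 2) (assumed: classification is fairly accurate).
p  = rng.beta(1, 1, draws)
se = rng.beta(20, 2, draws)
sp = rng.beta(20, 2, draws)
q  = p * se + (1 - p) * (1 - sp)
m1 = binom.pmf(k, n, q).mean()

print(f"BF01 = {m0 / m1:.3f}   (values > 1 favor no misclassification)")
```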


Estimating The Integrated Likelihood Via Posterior Simulation Using The Harmonic Mean Identity, Adrian E. Raftery, Michael A. Newton, Jaya M. Satagopan, Pavel N. Krivitsky Apr 2006

Memorial Sloan-Kettering Cancer Center, Dept. of Epidemiology & Biostatistics Working Paper Series

The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the …
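The identity itself is simple to state and check numerically: 1/p(y) = E[1/p(y|theta)], with the expectation taken over the posterior, so the harmonic mean of the simulated likelihoods estimates the integrated likelihood. Below is a minimal sketch on a conjugate normal model where the exact answer is available for comparison; the model and numbers are assumptions of the sketch, not the paper's example, and the raw estimator is known to have high variance, which motivates stabilized variants.

```python
# Harmonic mean estimate of the integrated likelihood, vs. the exact value,
# for y_i ~ N(theta, 1), theta ~ N(0, 1) (conjugate, so truth is closed form).
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(2)
n = 30
y = rng.normal(0.5, 1.0, n)

# Exact posterior and exact log integrated likelihood for this model.
post_mean, post_var = n * y.mean() / (n + 1), 1.0 / (n + 1)
log_py_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                - 0.5 * (np.sum(y**2) - (n * y.mean())**2 / (n + 1)))

# Harmonic mean identity in log space: log p(y) = -log E_post[exp(-loglik)].
theta = rng.normal(post_mean, np.sqrt(post_var), 100_000)  # posterior draws
loglik = (-0.5 * n * np.log(2 * np.pi)
          - 0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1))
log_py_hm = -(logsumexp(-loglik) - np.log(theta.size))

print(f"exact log p(y)         = {log_py_exact:.3f}")
print(f"harmonic mean estimate = {log_py_hm:.3f}")
```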


A Bayesian Subset Analysis Of Sensory Evaluation Data, Balgobin Nandram Nov 2005

Journal of Modern Applied Statistical Methods

In the social sciences it is easy to carry out sensory experiments using, say, a J-point hedonic scale. One major problem with the J-point hedonic scale is that converting the category scales to numeric scores might not be sensible, because panelists generally view increments on the hedonic scale as psychologically unequal. In the current problem several products are rated by a set of panelists on the J-point hedonic scale. One objective is to select the best subset of products and to assess the quality of the products by estimating the mean and standard deviation response …
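A small illustration of why the numeric conversion can mislead (constructed here, not the paper's analysis): ratings on a 9-point scale are summarized under three different monotone scorings, and the ranking of a bland product against a polarizing one changes with the scoring, even though every scoring respects the category order.

```python
# Three monotone scorings of a 9-point hedonic scale; the product ranking
# is not invariant to the (arbitrary) choice of numeric scores.
import numpy as np

a = np.full(100, 5)                  # product A: everyone rates it 5
b = np.array([1] * 50 + [9] * 50)    # product B: polarizing, half 1s, half 9s

scorings = {
    "linear":         {k: k for k in range(1, 10)},            # usual 1..9
    "stretch top":    {**{k: k for k in range(1, 9)}, 9: 15},  # 9 "worth" more
    "stretch bottom": {**{k: k for k in range(2, 10)}, 1: -5}, # 1 "worth" less
}

for name, s in scorings.items():
    f = np.vectorize(s.get)
    print(f"{name:14s}: mean(A) = {f(a).mean():5.2f}   mean(B) = {f(b).mean():5.2f}")
```

Under the linear scoring the products tie at 5.00; stretching the top category ranks B above A, and stretching the bottom ranks B below A.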


The Bayesian Two-Sample T-Test, Mithat Gonen, Wesley O. Johnson, Yonggang Lu, Peter H. Westfall Apr 2005

Memorial Sloan-Kettering Cancer Center, Dept. of Epidemiology & Biostatistics Working Paper Series

In this article we show how the pooled-variance two-sample t-statistic arises from a Bayesian formulation of the two-sided point-null testing problem, with emphasis on teaching. We identify a reasonable and useful prior giving a closed-form Bayes factor that can be written in terms of the distribution of the two-sample t-statistic under the null and alternative hypotheses, respectively. This provides a Bayesian motivation for the two-sample t-statistic, which has heretofore been buried as a special case of more complex linear models, or given only roughly via analytic or Monte Carlo approximations. The resulting formulation of the Bayesian test is easy …
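The article derives the Bayes factor in closed form; the hedged sketch below instead integrates numerically, which is enough to show the structure. Under H0 the pooled t-statistic is central t with nu = n1 + n2 - 2 degrees of freedom; under a fixed standardized effect delta it is noncentral t with noncentrality delta*sqrt(n_delta), n_delta = 1/(1/n1 + 1/n2); averaging the noncentral density over a normal prior on delta gives the marginal under H1. The sample sizes, the observed t value, and the N(0, 1) prior are assumptions of the sketch.

```python
# Bayes factor for the two-sample problem via the t-statistic's sampling
# distribution, with the standardized effect delta integrated out by Monte Carlo.
import numpy as np
from scipy.stats import t as t_dist, nct

rng = np.random.default_rng(4)
n1, n2, tstat = 20, 25, 2.1            # hypothetical sample sizes, observed t
nu = n1 + n2 - 2                       # pooled degrees of freedom
n_delta = 1.0 / (1.0 / n1 + 1.0 / n2)  # effective sample size for the contrast

lam, sd_d = 0.0, 1.0                   # assumed prior: delta ~ N(0, 1)
delta = rng.normal(lam, sd_d, 100_000)

m0 = t_dist.pdf(tstat, nu)                                # central t under H0
m1 = nct.pdf(tstat, nu, delta * np.sqrt(n_delta)).mean()  # prior-averaged noncentral t
print(f"BF01 = {m0 / m1:.3f}   (values < 1 favor a difference between groups)")
```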