Open Access. Powered by Scholars. Published by Universities.®
Articles 1 - 7 of 7
Full-Text Articles in Statistical Models
Multi-Level Small Area Estimation Based On Calibrated Hierarchical Likelihood Approach Through Bias Correction With Applications To Covid-19 Data, Nirosha Rathnayake
Theses & Dissertations
Small area estimation (SAE) has been widely used in a variety of applications to draw estimates for geographic domains such as a metropolitan area, district, county, or state. Direct estimation methods provide accurate estimates when the sample size of study participants within each area unit is sufficiently large, but large samples are not always realistic for small geographic regions. Meanwhile, high-dimensional socio-ecological data exist at the community level, providing an opportunity for model-based estimation that incorporates rich auxiliary information at the individual and area levels. Thus, it is critical …
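The tension the abstract describes (unreliable direct estimates in thinly sampled areas vs. model-based estimates built from auxiliary data) is often resolved by blending the two. The sketch below shows a generic composite (shrinkage) estimator in the spirit of Fay-Herriot; it is an illustration only, not the calibrated hierarchical-likelihood method of the thesis, and all numbers are hypothetical.

```python
def composite_estimate(direct, synthetic, model_var, sampling_var):
    """Blend a noisy direct estimate with a model-based synthetic estimate.

    The weight gamma -> 1 when the direct estimate is precise (small
    sampling variance) and gamma -> 0 when the area sample is tiny and the
    auxiliary-data model must carry the load.
    """
    gamma = model_var / (model_var + sampling_var)
    return gamma * direct + (1 - gamma) * synthetic

# Hypothetical county with few respondents: large sampling variance pulls
# the estimate strongly toward the model-based synthetic value.
est = composite_estimate(direct=0.30, synthetic=0.20,
                         model_var=0.001, sampling_var=0.009)
```

With these numbers gamma = 0.1, so the blended estimate sits much closer to the synthetic value than to the direct one.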
Statistical Approaches Of Gene Set Analysis With Quantitative Trait Loci For High-Throughput Genomic Studies., Samarendra Das
Electronic Theses and Dissertations
Recently, gene set analysis has become the first choice for gaining insights into the underlying complex biology of diseases through high-throughput genomic studies such as microarrays, bulk RNA sequencing, and single-cell RNA sequencing. It also reduces the complexity of statistical analysis and enhances the explanatory power of the results. However, the statistical structure and steps common to these approaches have not yet been comprehensively discussed, which limits their utility. Hence, a comprehensive overview of the available gene set analysis approaches used for different high-throughput genomic studies is provided. The analysis of gene sets is usually carried out based on …
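One of the simplest gene set analysis approaches the overview covers is over-representation analysis: testing whether a predefined gene set overlaps a list of differentially expressed genes more than chance would predict, via a hypergeometric tail probability. This is a minimal sketch of that idea with made-up counts; it stands in for the general family, not for any specific method in the dissertation.

```python
from math import comb

def hypergeom_sf(k, M, n, N):
    """P(X >= k) when N genes are drawn without replacement from a
    population of M genes, n of which belong to the gene set."""
    return sum(comb(n, i) * comb(M - n, N - i)
               for i in range(k, min(n, N) + 1)) / comb(M, N)

# Hypothetical study: 20,000 genes total, a 100-gene set, 500 differentially
# expressed genes, 10 of which fall inside the set.  Chance alone would give
# an overlap of about 100 * 500 / 20000 = 2.5, so 10 looks enriched.
p = hypergeom_sf(k=10, M=20000, n=100, N=500)
```

A small tail probability here is the evidence of enrichment that downstream steps of a gene set analysis then adjust for multiple testing.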
Applying The Data: Predictive Analytics In Sport, Anthony Teeter, Margo Bergman
Access*: Interdisciplinary Journal of Student Research and Scholarship
The history of wagering predictions and their impact on wide-reaching disciplines such as statistics and economics dates to at least the 1700s, if not before. Predicting the outcomes of sports is a multibillion-dollar business that capitalizes on these tools and is in constant development with the addition of big data analytics methods. Sportsline.com, a popular website for fantasy sports leagues, provides odds predictions in multiple sports, produces proprietary computer models of both winning and losing teams, and provides specific point estimates. To test likely candidates for inclusion in these prediction algorithms, the authors developed a computer model and test …
A Monte Carlo Analysis Of Standard Error-Based Methods For Computing Confidence Intervals, Elayna Wichert
Masters Theses & Specialist Projects
The objective of this study is to empirically test existing techniques for calculating the likely range of values of a Classical Test Theory true score given an observed score. The traditional method for forming these confidence intervals has used the standard error of measurement (SEM) as its basis. An alternate equation, the standard error of estimate (SEE), has been recommended in place of the SEM for this purpose, yet it remains overlooked in the field of psychometrics. It is important that the correct equation be used in various applications in personnel psychology. Monte Carlo analyses were …
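The two intervals being compared can be written down directly from Classical Test Theory. A sketch, using the standard textbook formulas (SEM = sd·sqrt(1 − reliability); SEE = sd·sqrt(reliability·(1 − reliability)), centered on the Kelley regressed true-score estimate); the example numbers are hypothetical, and this is an illustration of the formulas rather than the thesis's simulation design.

```python
import math

def sem_interval(x, sd, rel, z=1.96):
    """Traditional CI around the observed score x, using the
    standard error of measurement: SEM = sd * sqrt(1 - rel)."""
    sem = sd * math.sqrt(1 - rel)
    return (x - z * sem, x + z * sem)

def see_interval(x, mean, sd, rel, z=1.96):
    """Alternate CI using the standard error of estimate:
    SEE = sd * sqrt(rel * (1 - rel)).  It is centered on the Kelley
    estimate rel*x + (1 - rel)*mean, not on the observed score itself."""
    see = sd * math.sqrt(rel * (1 - rel))
    t_hat = rel * x + (1 - rel) * mean
    return (t_hat - z * see, t_hat + z * see)

# Hypothetical IQ-style scale: observed 120, mean 100, sd 15, reliability .90.
lo, hi = see_interval(120, 100, 15, 0.90)
```

Note the practical difference: the SEE interval is both narrower than the SEM interval and shifted toward the group mean, which is exactly the kind of coverage behavior a Monte Carlo study can adjudicate.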
Inferences For Weibull-Gamma Distribution In Presence Of Partially Accelerated Life Test, Mahmoud Mansour, M A W Mahmoud Prof., Rashad El-Sagheer
Basic Science Engineering
In this paper, the aim is to obtain point and interval estimates for the parameters of the Weibull-Gamma distribution (WGD) using a progressively Type-II censored (PROG-II-C) sample under the step-stress partially accelerated life test (SSPALT) model. The maximum likelihood (ML), Bayes, and four parametric bootstrap methods are used to obtain point estimates for the distribution parameters and the acceleration factor. Furthermore, the approximate confidence intervals (ACIs), four bootstrap confidence intervals, and credible intervals of the estimators have been obtained. The Bayes estimators are computed under the squared error loss (SEL) function using Markov Chain Monte Carlo (MCMC) …
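The bootstrap intervals the paper compares all start from the same recipe: refit the model to resampled data and read off quantiles of the replicated estimates. The sketch below shows the simplest member of that family, a parametric percentile bootstrap, using an exponential lifetime model with a closed-form MLE as a deliberately simplified stand-in for the Weibull-Gamma fit; the data and parameters are invented for illustration.

```python
import random

def boot_percentile_ci(sample, n_boot=2000, alpha=0.05, seed=1):
    """Parametric percentile bootstrap CI for an exponential mean lifetime.
    Boot-BC / Boot-BCa intervals add bias and acceleration corrections on
    top of this basic percentile construction."""
    rng = random.Random(seed)
    mle = sum(sample) / len(sample)          # MLE of the exponential mean
    reps = []
    for _ in range(n_boot):
        resample = [rng.expovariate(1.0 / mle) for _ in sample]
        reps.append(sum(resample) / len(resample))
    reps.sort()
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Simulated (hypothetical) lifetimes with true mean 50, n = 30.
rng = random.Random(7)
lifetimes = [rng.expovariate(1.0 / 50.0) for _ in range(30)]
lo, hi = boot_percentile_ci(lifetimes)
```

Replacing the closed-form MLE with a numerical Weibull-Gamma fit inside the loop gives the structure of the paper's bootstrap intervals, at correspondingly higher cost per replication.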
Assessing Robustness Of The Rasch Mixture Model To Detect Differential Item Functioning - A Monte Carlo Simulation Study, Jinjin Huang
Electronic Theses and Dissertations
Measurement invariance is crucial for an effective and valid measure of a construct. Invariance holds when the latent trait varies consistently across subgroups; in other words, mean differences among subgroups are due only to true latent ability differences. Differential item functioning (DIF) occurs when measurement invariance is violated. There are two kinds of traditional tools for DIF detection: non-parametric methods and parametric methods. Mantel-Haenszel (MH), SIBTEST, and standardization are examples of non-parametric DIF detection methods. The majority of parametric DIF detection methods are item response theory (IRT) based. Both non-parametric methods and parametric methods compare differences among subgroups …
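What DIF means under the Rasch model can be stated in one line of code: the same item carries a different difficulty parameter for different groups, so test takers with identical ability have different success probabilities. A minimal sketch with hypothetical parameter values (this illustrates the standard Rasch response function, not the mixture-model detection procedure the thesis studies):

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response for a person with
    ability theta on an item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Uniform DIF (hypothetical numbers): the same item is harder for the focal
# group (b = 0.5) than for the reference group (b = 0.0), so equally able
# respondents from the two groups do not have equal success probabilities.
p_ref = rasch_prob(theta=0.0, b=0.0)     # reference group
p_focal = rasch_prob(theta=0.0, b=0.5)   # focal group, same ability
```

A Rasch mixture model turns this around: instead of comparing predefined subgroups, it lets latent classes with distinct difficulty profiles emerge from the data, which is why its robustness is worth probing by simulation.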
How Machine Learning And Probability Concepts Can Improve Nba Player Evaluation, Harrison Miller
CMC Senior Theses
In this paper I break down a scholarly article, written by Sameer K. Deshpande and Shane T. Jensen, that proposed a new method to evaluate NBA players. The NBA (National Basketball Association) is the highest-level professional basketball league in America. Deshpande and Jensen proposed a model of how NBA players affect their teams' chances of winning a game, built on machine learning and probability concepts. I preface that by diving into these concepts and their mathematical backgrounds. These concepts include building a linear model using the ordinary least squares method, the bias …
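The first building block the thesis walks through, ordinary least squares, fits a line by minimizing the sum of squared residuals. A self-contained single-predictor sketch with toy, made-up numbers (the actual Deshpande and Jensen model is considerably richer than this):

```python
def ols_fit(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)
    minimizing the sum of squared residuals, via the closed-form
    solution slope = S_xy / S_xx."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Toy data (invented): points scored vs. minutes played.
# These points lie exactly on y = 1 + 0.4 * x.
b0, b1 = ols_fit([10, 20, 30, 40], [5, 9, 13, 17])
```

Stacking many such predictors into a design matrix and solving the analogous normal equations is the multivariate version used in player-evaluation models.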