Articles 31 - 39 of 39
Full-Text Articles in Statistical Methodology
Propensity Score Methods: A Simulation And Case Study Involving Breast Cancer Patients., John Craycroft
Electronic Theses and Dissertations
Observational data presents unique challenges for analysis that are not encountered with experimental data resulting from carefully designed randomized controlled trials. Selection bias and unbalanced treatment assignments can obscure estimations of treatment effects, making the process of causal inference from observational data highly problematic. In 1983, Paul Rosenbaum and Donald Rubin formalized an approach for analyzing observational data that adjusts treatment effect estimates for the set of non-treatment variables that are measured at baseline. The propensity score is the conditional probability of assignment to a treatment group given the covariates. Using this score, one may balance the covariates across treatment …
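The reweighting idea behind propensity scores can be sketched with a toy simulation. This is hypothetical data, not the thesis's breast-cancer cohort: a single binary covariate drives both treatment assignment and the outcome, the propensity score e(x) = P(T = 1 | X = x) is estimated within each covariate group, and an inverse-probability-weighted (IPW) estimate recovers the treatment effect that the naive comparison distorts.

```python
import random

random.seed(0)
# Synthetic observational data (hypothetical, not the thesis data): a binary
# covariate x raises both the odds of treatment and the outcome level.
n = 20000
data = []
for _ in range(n):
    x = 1 if random.random() < 0.5 else 0
    p_treat = 0.8 if x else 0.2          # selection bias: x raises treatment odds
    t = 1 if random.random() < p_treat else 0
    y = 2.0 * t + 3.0 * x + random.gauss(0, 1)   # true treatment effect = 2.0
    data.append((x, t, y))

# Naive difference in means is confounded by x.
treated = [y for x, t, y in data if t]
control = [y for x, t, y in data if not t]
naive = sum(treated) / len(treated) - sum(control) / len(control)

# Propensity score e(x) = P(T = 1 | X = x), estimated within each covariate
# group, then used in an inverse-probability-weighted estimate of the ATE.
e = {xv: (sum(t for x, t, y in data if x == xv)
          / sum(1 for x, t, y in data if x == xv)) for xv in (0, 1)}
ipw = sum(t * y / e[x] - (1 - t) * y / (1 - e[x]) for x, t, y in data) / n
print(round(naive, 2), round(ipw, 2))
```

The naive estimate absorbs the covariate's effect, while the IPW estimate lands near the true effect of 2.0; with real covariates the propensity score would be fitted with, e.g., a logistic regression rather than group proportions.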
Missing Data In Clinical Trial: A Critical Look At The Proportionality Of Mnar And Mar Assumptions For Multiple Imputation, Theophile B. Dipita
Electronic Theses and Dissertations
The randomized controlled trial is the gold standard of research studies. Randomization helps reduce bias and supports causal inference. One constraint of these studies is that they depend on participants to obtain the desired data. Whatever the researcher does, there is a possibility of ending up with incomplete data. The problem is especially relevant in clinical trials, where missing data can be related to the condition under study. The benefits of randomization are compromised by missing data. Multiple imputation is a valid method of treating missing data under the assumption of MAR. Unfortunately, this is an unverified assumption. Current practice advises …
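The mechanics of multiple imputation under MAR can be sketched on a hypothetical dataset (not the thesis data): the outcome y is more likely to be missing when a fully observed covariate x is large, each missing value is drawn from a model fitted to the observed cases, and the per-imputation estimates are pooled (Rubin's rule for the point estimate is simply their average).

```python
import random
import statistics

random.seed(1)
# Hypothetical MAR example: y depends on x, and y is more likely to be
# missing when x > 0, so missingness depends only on the observed x.
n = 500
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 + 1.5 * xi + random.gauss(0, 1) for xi in x]
observed = [random.random() > (0.6 if xi > 0 else 0.1) for xi in x]

def fit_line(xs, ys):
    # Simple least-squares fit of ys on xs.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    b = (sum((a - mx) * (c - my) for a, c in zip(xs, ys))
         / sum((a - mx) ** 2 for a in xs))
    return my - b * mx, b

xo = [xi for xi, o in zip(x, observed) if o]
yo = [yi for yi, o in zip(y, observed) if o]
a, b = fit_line(xo, yo)
resid_sd = statistics.stdev([yi - (a + b * xi) for xi, yi in zip(xo, yo)])

# Multiple imputation: draw each missing y from the fitted conditional
# model, repeat m times, then average the per-imputation estimates.
m = 20
estimates = []
for _ in range(m):
    y_imp = [yi if o else a + b * xi + random.gauss(0, resid_sd)
             for xi, yi, o in zip(x, y, observed)]
    estimates.append(statistics.mean(y_imp))
pooled = statistics.mean(estimates)

cc_mean = statistics.mean(yo)   # complete-case mean, biased under this MAR mechanism
print(round(cc_mean, 2), round(pooled, 2))
```

The complete-case mean is pulled down because large-x (and hence large-y) cases are preferentially missing; the pooled MI estimate recovers the population mean of 2.0 because the imputation model conditions on x.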
Exponentially Weighted Moving Average Charts For Monitoring The Process Generalized Variance, Anna Khamitova
Electronic Theses and Dissertations
The exponentially weighted moving average chart based on the sample generalized variance is studied under the independent multivariate normal model for the vector of quality measurements. The performance of the chart is assessed through an analysis of the chart's initial and steady-state run length distributions. Three methods commonly used to determine the run length distribution are discussed: simulation, the integral equation method, and the Markov chain approximation. The integral equation and Markov chain approaches are analytical methods that require a numerical method for determining the probability density and cumulative distribution functions describing the distribution of the sample …
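A minimal simulation sketch of such a chart, under assumptions of my own (bivariate data, subgroup size 10, λ = 0.2, asymptotic 3-sigma limits estimated from a clean Phase I sample — none of these choices come from the thesis): the EWMA statistic Z_t = λ|S_t| + (1 − λ)Z_{t−1} smooths the sample generalized variance |S| and signals when the process variance shifts.

```python
import random
import statistics

random.seed(2)
lam, L = 0.2, 3.0   # smoothing constant and limit width (illustrative choices)

def gen_var(sample):
    # Sample generalized variance |S|: determinant of the 2x2 sample covariance.
    xs = [p[0] for p in sample]
    ys = [p[1] for p in sample]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    d = len(sample) - 1
    sxx = sum((a - mx) ** 2 for a in xs) / d
    syy = sum((b - my) ** 2 for b in ys) / d
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / d
    return sxx * syy - sxy ** 2

def subgroup(size, sd):
    return [(random.gauss(0, sd), random.gauss(0, sd)) for _ in range(size)]

# Phase I: estimate the in-control mean and sd of |S| from clean subgroups.
phase1 = [gen_var(subgroup(10, 1.0)) for _ in range(2000)]
mu0, sd0 = statistics.mean(phase1), statistics.stdev(phase1)
ucl = mu0 + L * sd0 * (lam / (2 - lam)) ** 0.5   # asymptotic EWMA upper limit

# Phase II: the process variance doubles after subgroup 20; the EWMA
# Z_t = lam * |S_t| + (1 - lam) * Z_{t-1} should drift above the limit.
z, signal_at = mu0, None
for t in range(1, 61):
    sd = 1.0 if t <= 20 else 2.0 ** 0.5
    z = lam * gen_var(subgroup(10, sd)) + (1 - lam) * z
    if signal_at is None and z > ucl:
        signal_at = t
print(round(mu0, 2), round(ucl, 2), signal_at)
```

The simulation route shown here is the crudest of the three methods the abstract lists; the integral equation and Markov chain approaches compute the run length distribution analytically instead of by Monte Carlo.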
Income Inequality Measures And Statistical Properties Of Weighted Burr-Type And Related Distributions, Meznah R. Al Buqami
Electronic Theses and Dissertations
In this thesis, the tail conditional expectation (TCE), an important measure of right-tail risk in risk analysis, is presented. This value is generally based on the quantile of the loss distribution. Explicit formulas for several tail conditional expectations and inequality measures for Dagum-type models are derived. In addition, a new class of weighted Burr-III (WBIII) distributions is presented. The statistical properties of this distribution, including the hazard and reverse hazard functions, moments, coefficient of variation, skewness, kurtosis, inequality measures, and entropy, are derived. Also, Fisher information and maximum likelihood estimates of the model parameters are obtained.
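As a sketch of the quantity the abstract refers to (standard definition, not a formula taken from the thesis): for a loss random variable X with distribution function F, density f, and q-th quantile x_q = F^{-1}(q), the tail conditional expectation at level q is

```latex
\mathrm{TCE}_X(x_q) \;=\; E\!\left[X \mid X > x_q\right]
                    \;=\; \frac{1}{1-q}\int_{x_q}^{\infty} x\, f(x)\,dx .
```

The explicit Dagum-type formulas derived in the thesis amount to evaluating this integral in closed form for those particular densities f.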
Finding A Better Confidence Interval For A Single Regression Changepoint Using Different Bootstrap Confidence Interval Procedures, Bodhipaksha Thilakarathne
Electronic Theses and Dissertations
Recently a number of papers have been published in the area of regression changepoints, but there is little literature concerning confidence intervals for regression changepoints. The purpose of this paper is to find a better bootstrap confidence interval for a single regression changepoint. (A "better" confidence interval is one with minimum length and a coverage probability close to the nominal confidence level.) Several methods will be used to find bootstrap confidence intervals, and among them the better confidence interval will be presented.
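One of the simplest procedures in this family, the percentile bootstrap, can be sketched on hypothetical piecewise-linear data (my own setup, not the paper's): estimate the changepoint by a grid search minimizing the two-segment residual sum of squares, re-estimate it on resampled (x, y) pairs, and take empirical quantiles of the bootstrap estimates.

```python
import random

random.seed(3)
# Hypothetical piecewise-linear data with a slope change at x = 30.
n = 60
xs = list(range(n))
ys = [x + random.gauss(0, 2) if x < 30 else 30 + 3.0 * (x - 30) + random.gauss(0, 2)
      for x in xs]
pts = list(zip(xs, ys))

def seg_sse(seg):
    # Residual sum of squares of a least-squares line through one segment.
    if len(seg) < 3:
        return float('inf')   # degenerate segment: exclude this split
    mx = sum(p[0] for p in seg) / len(seg)
    my = sum(p[1] for p in seg) / len(seg)
    sxx = sum((p[0] - mx) ** 2 for p in seg)
    if sxx == 0:
        return float('inf')
    b = sum((p[0] - mx) * (p[1] - my) for p in seg) / sxx
    a = my - b * mx
    return sum((p[1] - (a + b * p[0])) ** 2 for p in seg)

def changepoint(sample):
    # Grid search: the changepoint estimate minimizes the two-segment SSE.
    return min(range(5, n - 5),
               key=lambda c: seg_sse([p for p in sample if p[0] < c])
                             + seg_sse([p for p in sample if p[0] >= c]))

# Percentile bootstrap: resample (x, y) pairs, re-estimate the changepoint,
# and read off the empirical 2.5% / 97.5% quantiles.
boot = sorted(changepoint([random.choice(pts) for _ in pts]) for _ in range(200))
lo, hi = boot[4], boot[194]   # approximate 2.5th and 97.5th percentiles
print(changepoint(pts), (lo, hi))
```

The paper's comparison would additionally cover refinements such as basic, bias-corrected, or studentized bootstrap intervals, which differ only in how the bootstrap distribution is turned into interval endpoints.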
Power Analysis For Alternative Tests For The Equality Of Means., Haiyin Li
Electronic Theses and Dissertations
The two-sample t-test is the test usually taught in introductory statistics courses for comparing the means of two populations. However, the t-test is not the only test available for this purpose. The randomization test is being incorporated into some introductory courses, and there is also the bootstrap test. It is also not uncommon to decide the equality of the means based on confidence intervals for the means of the two populations. Are all these methods equally powerful? Can the idea of non-overlapping t confidence intervals be extended to bootstrap confidence intervals? The powers …
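Two of the competing tests can be sketched side by side on hypothetical samples (my own numbers, not the thesis's simulation design): the pooled two-sample t statistic, and a randomization test whose p-value is the fraction of label reshuffles producing a mean difference at least as extreme as the observed one.

```python
import math
import random
import statistics

random.seed(4)
# Two hypothetical samples whose true means differ by 0.8 standard deviations.
a = [random.gauss(0.0, 1) for _ in range(80)]
b = [random.gauss(0.8, 1) for _ in range(80)]

def t_stat(x, y):
    # Classic pooled two-sample t statistic.
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x) + (ny - 1) * statistics.variance(y)) \
          / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Randomization test: under H0 the group labels are exchangeable, so reshuffle
# the labels and count reshuffles at least as extreme as the observed difference.
obs = abs(statistics.mean(a) - statistics.mean(b))
pool = a + b
reps, extreme = 2000, 0
for _ in range(reps):
    random.shuffle(pool)
    if abs(statistics.mean(pool[:80]) - statistics.mean(pool[80:])) >= obs:
        extreme += 1
p_perm = extreme / reps
print(round(t_stat(a, b), 2), p_perm)
```

A power study like the thesis's would repeat this over many simulated datasets and effect sizes and record how often each test rejects; the sketch above shows a single replicate.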
Modeling The Progression Of Discrete Paired Longitudinal Data., Jonathan Wesley Hicks
Electronic Theses and Dissertations
It is our intention to derive a methodology with which to model discrete paired longitudinal data. Through the use of transition matrices and maximum likelihood estimation techniques, implemented in software, we develop a way to model the progression of such data. We provide an example by applying this method to the Wisconsin Epidemiologic Study of Diabetic Retinopathy data set. The data set is comprised of individuals, all diabetics, who have had their eyes examined for diabetic retinopathy. The eyes are treated as paired data, and we have the results of the examination at the four unequally spaced time points …
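The transition-matrix backbone of such a model can be sketched on toy sequences (illustrative stand-ins for the examination data, not the actual WESDR records): for a time-homogeneous Markov chain, the maximum likelihood estimate of each transition probability is simply the row-normalized count of observed one-step transitions.

```python
from collections import Counter

# Toy sequences of a discrete severity state (e.g. 0/1/2) at consecutive visits.
sequences = [
    [0, 0, 1, 1],
    [0, 1, 1, 2],
    [1, 1, 2, 2],
    [0, 0, 0, 1],
    [1, 2, 2, 2],
]

# MLE of a time-homogeneous transition matrix: row-normalized counts of
# observed one-step transitions.
counts = Counter()
for seq in sequences:
    for s, t in zip(seq, seq[1:]):
        counts[s, t] += 1

states = sorted({s for seq in sequences for s in seq})
P = [[counts[i, j] / sum(counts[i, k] for k in states) for j in states]
     for i in states]
for row in P:
    print([round(p, 2) for p in row])
```

Handling paired eyes and unequally spaced visits, as the thesis does, requires extending this with a joint or conditional structure for the pair and time-dependent transition matrices, but the counting-and-normalizing step stays the same.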
Confidence Intervals For Population Size In A Capture-Recapture Problem., Xiao Zhang
Electronic Theses and Dissertations
In a single capture-recapture problem, two new Wilson methods for interval estimation of population size are derived. The classical Chapman interval and the Wilson and Wilson-cc intervals are examined and compared in terms of their expected interval width and exact coverage properties under two models. The new approach performs better than the Chapman interval in each model. Bayesian analysis also gives a different way to estimate population size.
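The setting can be made concrete with a small worked example (my own numbers and one simple Wilson-style construction, not the thesis's derived intervals): n1 animals are marked in the first sample, n2 are caught in the second, and m of those are recaptures. Chapman's estimator gives the point estimate, and a Wilson score interval for the recapture proportion p = m/n2 can be inverted through N = n1/p.

```python
import math

# Single capture-recapture: n1 marked in the first sample, n2 caught in the
# second, m of which were recaptures (illustrative numbers).
n1, n2, m = 200, 150, 30

# Chapman's (nearly unbiased) estimator of population size.
chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Wilson score interval for the recapture proportion p = m / n2, inverted to
# an interval for N via N = n1 / p (one simple Wilson-style method; the
# thesis derives its own variants).
z = 1.96
p_hat = m / n2
denom = 1 + z * z / n2
center = (p_hat + z * z / (2 * n2)) / denom
half = z * math.sqrt(p_hat * (1 - p_hat) / n2 + z * z / (4 * n2 * n2)) / denom
N_lo, N_hi = n1 / (center + half), n1 / (center - half)
print(round(chapman), (round(N_lo), round(N_hi)))
```

Note the asymmetry of the resulting interval for N: inverting a symmetric interval for p through 1/p stretches the upper side, which is one reason interval choice matters so much in this problem.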
Estimation Of Standardized Mortality Ratio In Epidemiological Studies, Bingxia Wang
Electronic Theses and Dissertations
In epidemiological studies, we are often interested in comparing the mortality rate of a certain cohort to that of a standard population. A standard computational statistic in this regard is the Standardized Mortality Ratio (SMR) (Breslow and Day, 1987), given by SMR = O/E, where O is the number of deaths observed in the study cohort from a specified cause and E is the expected number calculated from that population. In occupational epidemiology, the SMR is the most common measure of risk. It is a comparative statistic. It is frequently based on a comparison of the number O in the cohort with the expected value …
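A small worked example of the ratio (illustrative numbers, not from the thesis), with one standard approximate interval: treating O as Poisson with mean SMR × E, Byar's approximation gives a closed-form 95% confidence interval for the SMR.

```python
# SMR = O / E with Byar's approximate 95% Poisson confidence interval
# (illustrative numbers, not from the thesis).
O, E = 30, 21.5          # observed vs expected deaths
z = 1.96
smr = O / E
lo = O * (1 - 1 / (9 * O) - z / (3 * O ** 0.5)) ** 3 / E
hi = (O + 1) * (1 - 1 / (9 * (O + 1)) + z / (3 * (O + 1) ** 0.5)) ** 3 / E
print(round(smr, 2), (round(lo, 2), round(hi, 2)))
```

Here the interval straddles 1 only narrowly on the lower side, so 30 observed deaths against 21.5 expected is borderline evidence of excess mortality; an exact Poisson interval would give nearly the same endpoints at these counts.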