Open Access. Powered by Scholars. Published by Universities.®



Articles 1 - 20 of 20

Full-Text Articles in Physical Sciences and Mathematics

High-Dimensional Variable Selection Via Knockoffs Using Gradient Boosting, Amr Essam Mohamed Apr 2023


Dissertations

As data continue to grow rapidly in size and complexity, efficient and effective statistical methods are needed to detect the important variables/features. Variable selection is one of the most crucial problems in statistical applications. This problem arises when one wants to model the relationship between the response and the predictors. The goal is to reduce the number of variables to a minimal set of explanatory variables that are truly associated with the response of interest to improve the model accuracy. Effectively choosing the true influential variables and controlling the False Discovery Rate (FDR) without sacrificing power has been a challenge …
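The knockoff idea the abstract builds on can be sketched in a few lines. The following is a minimal illustration, not the dissertation's method: it assumes independent Gaussian features (so a valid knockoff copy is simply an independent redraw), uses scikit-learn's GradientBoostingRegressor importances as the feature statistic, and applies the knockoff+ threshold at a target FDR of 0.2.

```python
# Sketch of a model-X knockoff filter with gradient boosting importances.
# Assumption: independent Gaussian features, so knockoffs are fresh
# independent Gaussian draws; the statistic is
#   W_j = importance(X_j) - importance(knockoff_j).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n, p, k = 300, 30, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0                          # first k features are the true signals
y = X @ beta + rng.standard_normal(n)

# For independent Gaussian features, an exact knockoff copy is an
# independent draw from the same distribution.
X_knock = rng.standard_normal((n, p))

gbm = GradientBoostingRegressor(random_state=0).fit(np.hstack([X, X_knock]), y)
imp = gbm.feature_importances_
W = imp[:p] - imp[p:]                   # original importance minus knockoff

q = 0.2                                 # target false discovery rate
threshold = np.inf
for t in np.sort(np.abs(W[W != 0])):    # knockoff+ threshold search
    fdp = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
    if fdp <= q:
        threshold = t
        break
selected = np.where(W >= threshold)[0]
print("selected features:", selected)
```

With a strong signal the statistics for the true features dominate, and the selected set concentrates on the first k indices.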


Sparse Partitioned Empirical Bayes Ecm Algorithms For High-Dimensional Linear Mixed Effects And Heteroscedastic Regression, Anja Zgodic Apr 2023


Theses and Dissertations

Variable selection methods in both the frequentist and Bayesian frameworks are powerful techniques that provide prediction and inference in high-dimensional linear regression models. These methods often assume independence between observations and normally distributed errors with the same variance. In practice, these two assumptions are often violated. To mitigate this, we develop efficient and powerful Bayesian approaches for linear mixed modeling and heteroscedastic linear regression. These methods offer increased flexibility through the development of empirical Bayes estimators for hyperparameters, with computationally efficient estimation through the Expectation Conditional-Minimization (ECM) algorithm. The novelty of these approaches lies in the partitioning and parameter expansion, …


Multiple Frailty Model For Spatially Correlated Interval-Censored, Wanfang Zhang Oct 2021


Theses and Dissertations

In this paper, we consider the problem of multiple frailty selection for general interval-censored spatial survival data, which often occur in clinical trials and epidemiological studies. General interval-censored data are a mixture of left-, right-, and interval-censored observations. We propose a Bayesian semiparametric approach based on the Cox proportional hazards model, in which monotone splines are used for nonparametric modeling of the cumulative baseline hazard and variable selection priors are used for frailty selection. A two-stage data augmentation with Poisson latent variables is developed for efficient computation. The approach is evaluated based on a simulation study and illustrated using a …


Bayesian Variable Selection Strategies In Longitudinal Mixture Models And Categorical Regression Problems., Md Nazir Uddin Aug 2021


Electronic Theses and Dissertations

In this work, we seek to develop a variable screening and selection method for Bayesian mixture models with longitudinal data. To develop this method, we consider data from the Health and Retirement Survey (HRS) conducted by the University of Michigan. Considering yearly out-of-pocket expenditures as the longitudinal response variable, we consider a Bayesian mixture model with $K$ components. The data consist of a large collection of demographic, financial, and health-related baseline characteristics, and we wish to find a subset of these that impact cluster membership. An initial mixture model without any cluster-level predictors is fit to the data through an MCMC …


"A Comparison Of Variable Selection Methods Using Bootstrap Samples From Environmental Metal Mixture Data", Paul-Yvann Djamen Jul 2020


Mathematics & Statistics ETDs

In this thesis, I studied a newly developed variable selection method, SODA, and three customarily used variable selection methods: LASSO, elastic net, and random forest, for environmental mixture data. The motivating datasets have neuro-developmental status as responses and metal measurements and demographic variables as covariates. The challenges for variable selection include: (1) many measured metal concentrations are highly correlated; (2) there are many possible ways of modeling interactions among the metals; (3) the relationships between the outcomes and explanatory variables are possibly nonlinear; and (4) the signal-to-noise ratio in the real data may be low. To compare these methods …
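The bootstrap-based comparison described here can be sketched for a single method. The snippet below is an illustrative sketch, not the thesis code: it tracks how often the lasso (one of the methods named above) selects each variable across bootstrap resamples of a toy data set containing one highly correlated pair; all settings are assumptions.

```python
# Selection frequency of each variable over bootstrap resamples,
# using the lasso as the selector (illustrative settings).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n)   # highly correlated pair
y = 2 * X[:, 0] + X[:, 2] + rng.standard_normal(n)

B = 100
freq = np.zeros(p)
for b in range(B):
    idx = rng.integers(0, n, n)                    # bootstrap resample
    fit = Lasso(alpha=0.1).fit(X[idx], y[idx])
    freq += (fit.coef_ != 0)                       # count selections
freq /= B
print("selection frequency per variable:", np.round(freq, 2))
```

Variables with true effects are selected in nearly every resample, while noise variables (and the redundant correlated copy) appear far less stably — exactly the kind of behavior such a comparison is designed to expose.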


Effect Of Predictor Dependence On Variable Selection For Linear And Log-Linear Regression, Apu Chandra Das Jul 2020


Graduate Theses and Dissertations

We propose a Bayesian approach to the Dirichlet-Multinomial (DM) regression model, which uses horseshoe, Laplace, and horseshoe plus priors for shrinkage and selection. The Dirichlet-Multinomial model can be used to find the significant association between a set of available covariates and taxa for a microbiome sample. We incorporate the covariates in a log-linear regression framework. We design a simulation study to make a comparison among the performance of the three shrinkage priors in terms of estimation accuracy and the ability to detect true signals. Our results have clearly separated the performance of the three priors and indicated that the horseshoe …


On Variable Selections In High-Dimensional Incomplete Data, Tao Sun Jun 2020


Major Papers

Modern statistics has entered the era of Big Data, wherein data sets are too large, high-dimensional, incomplete, and complex for most classical statistical methods. Our analysis of Big Data first focuses on missing data, comparing different multiple imputation methods. Motivated by the characteristics of medical high-throughput experiments, we compare multivariate imputation by chained equations (MICE), missing forest (missForest), and self-training selection (STS) methods. A phenotypic data set of common lung disease was assessed. Moreover, to improve the interpretability and predictability of the model, variable selection plays a pivotal role in the subsequent analysis. Taking the Lasso-Poisson …


Novel Bayesian Methodology In Multivariate Problems., Debamita Kundu Aug 2019


Electronic Theses and Dissertations

This dissertation involves developing novel Bayesian methodology for multivariate problems. In particular, it focuses on two contexts: shrinkage-based variable selection in multivariate regression and simultaneous covariance estimation for multiple groups. Both projects are centered around fully Bayesian inference schemes based on hierarchical modeling to capture context-specific features of the data and the development of computationally efficient estimation algorithms. Variable selection over a potentially large set of covariates in a linear model is quite popular. In the Bayesian context, common prior choices can lead to a posterior expectation of the regression coefficients that is a sparse (or nearly sparse) …


Variable Selection Techniques For Clustering On The Unit Hypersphere, Damon Bayer Jan 2018


Electronic Theses and Dissertations

Mixtures of von Mises-Fisher distributions have been shown to be an effective model for clustering data on a unit hypersphere, but variable selection for these models remains an important and challenging problem. In this paper, we derive two variants of the expectation-maximization framework, which are each used to identify a specific type of irrelevant variables for these models. The first type are noise variables, which are not useful for separating any pairs of clusters. The second type are redundant variables, which may be useful for separating pairs of clusters, but do not enable any additional separation beyond the separability provided …


A Bayesian Variable Selection Method With Applications To Spatial Data, Xiahan Tang May 2017


Graduate Theses and Dissertations

This thesis first describes the general idea behind Bayesian inference, various sampling methods based on Bayes' theorem, and many examples. Then a Bayesian approach to model selection, called Stochastic Search Variable Selection (SSVS), is discussed. It was originally proposed by George and McCulloch (1993). In a normal regression model where the number of covariates is large, only a small subset tends to be significant most of the time. This Bayesian procedure specifies a mixture prior for each of the unknown regression coefficients; the mixture prior was originally proposed by Geweke (1996). This mixture prior will be updated as data become …
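A toy version of the SSVS machinery can make the mixture-prior idea concrete. The sketch below is a minimal Gibbs sampler under a George–McCulloch-style normal mixture prior with a fixed, known noise variance; all hyperparameters and dimensions are illustrative assumptions rather than the thesis's settings.

```python
# Toy SSVS Gibbs sampler: beta_j ~ (1 - g_j) N(0, tau0^2) + g_j N(0, tau1^2),
# with a narrow "spike" (tau0) and a wide "slab" (tau1). Known noise variance
# is assumed for simplicity.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, p = 100, 6
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 0.0, 2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

tau0, tau1, sigma2 = 0.05, 5.0, 1.0     # spike sd, slab sd, noise variance
gamma = np.ones(p, dtype=int)
inclusion = np.zeros(p)
draws = 2000

XtX, Xty = X.T @ X, X.T @ y
for it in range(draws):
    # beta | gamma, y: multivariate normal with diagonal prior variances d
    d = np.where(gamma == 1, tau1**2, tau0**2)
    cov = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / d))
    beta = rng.multivariate_normal(cov @ Xty / sigma2, cov)
    # gamma_j | beta_j: compare slab vs spike densities at the current draw
    p_slab = norm.pdf(beta, 0, tau1)
    p_spike = norm.pdf(beta, 0, tau0)
    gamma = rng.binomial(1, p_slab / (p_slab + p_spike))
    if it >= draws // 2:                # discard burn-in
        inclusion += gamma
inclusion /= draws - draws // 2
print("posterior inclusion probabilities:", np.round(inclusion, 2))
```

The posterior inclusion probabilities concentrate near 1 for the truly nonzero coefficients and stay low for the noise coefficients, which is precisely how SSVS turns the mixture prior into a selection rule.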


Model-Free Variable Screening, Sparse Regression Analysis And Other Applications With Optimal Transformations, Qiming Huang Aug 2016


Open Access Dissertations

Variable screening and variable selection methods play important roles in modeling high dimensional data. Variable screening is the process of filtering out irrelevant variables, with the aim to reduce the dimensionality from ultrahigh to high while retaining all important variables. Variable selection is the process of selecting a subset of relevant variables for use in model construction. The main theme of this thesis is to develop variable screening and variable selection methods for high dimensional data analysis. In particular, we will present two relevant methods for variable screening and selection under a unified framework based on optimal transformations.

In the …


Variable Selection Via Penalized Regression And The Genetic Algorithm Using Information Complexity, With Applications For High-Dimensional -Omics Data, Tyler J. Massaro Aug 2016


Doctoral Dissertations

This dissertation is a collection of examples, algorithms, and techniques for researchers interested in selecting influential variables from statistical regression models. Chapters 1, 2, and 3 provide background information that will be used throughout the remaining chapters, on topics including but not limited to information complexity, model selection, covariance estimation, stepwise variable selection, penalized regression, and especially the genetic algorithm (GA) approach to variable subsetting.

In chapter 4, we fully develop the framework for performing GA subset selection in logistic regression models. We present advantages of this approach against stepwise and elastic net regularized regression in selecting variables from a …


Bivariate Negative Binomial Hurdle With Random Spatial Effects, Robert Mcnutt Apr 2016


Dissertations

Count data with excess zeros widely occur in ecology, epidemiology, marketing, and many other disciplines. Mixture distributions consisting of a point mass at zero and a separate discrete distribution are often employed in regression models to account for excessive zero observations in the data. While Poisson models are very popular for count data, Negative Binomial models provide greater flexibility due to their ability to account for overdispersion.

This research focuses on developing a method for analyzing bivariate count data with excess zeros collected over a lattice. A bivariate Zero-Inflated Negative Binomial Hurdle (ZINBH) regression model with spatial random effects is …
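The hurdle construction described above — a point mass at zero plus a separate distribution for the positive counts — can be written down directly for the univariate negative binomial case. The function below is an illustrative sketch with assumed parameter names (`pi0` for the zero mass, `r` and `q` for the NB parameters), not the bivariate spatial model of the dissertation.

```python
# Univariate negative binomial hurdle pmf: P(Y=0) = pi0, and for k >= 1
# a zero-truncated NB(r, q) carries the remaining mass 1 - pi0.
import numpy as np
from scipy.stats import nbinom

def hurdle_nb_pmf(k, pi0, r, q):
    """P(Y = k) under a NB hurdle model with zero mass pi0."""
    k = np.asarray(k)
    # zero-truncated NB: renormalize the NB pmf over k >= 1
    base = nbinom.pmf(k, r, q) / (1.0 - nbinom.pmf(0, r, q))
    return np.where(k == 0, pi0, (1.0 - pi0) * base)

ks = np.arange(0, 50)
pmf = hurdle_nb_pmf(ks, pi0=0.4, r=2.0, q=0.5)
print("P(Y=0) =", pmf[0])
print("total mass over 0..49 =", pmf.sum())
```

Because the zero probability is modeled separately from the count process, the hurdle form can match any observed zero fraction, which is what makes it a natural fit for zero-excess count data.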


Roughened Random Forests For Binary Classification, Kuangnan Xiong Jan 2014


Legacy Theses & Dissertations (2009 - 2024)

Binary classification plays an important role in many decision-making processes. Random forests can build a strong ensemble classifier by combining weaker classification trees that are de-correlated. The strength and correlation among individual classification trees are the key factors that contribute to the ensemble performance of random forests. We propose roughened random forests, a new set of tools which show further improvement over random forests in binary classification. Roughened random forests modify the original dataset for each classification tree and further reduce the correlation among individual classification trees. This data modification process is composed of artificially imposing missing data that are …
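The roughening step, as described in the abstract, imposes artificial missingness on each tree's data to further de-correlate the trees. The sketch below is one plausible reading of that idea, not the author's implementation: each tree sees a bootstrap sample with a random 20% of entries blanked out and mean-imputed before fitting.

```python
# Hand-rolled "roughened" forest: bootstrap + artificial missingness
# (mean-imputed) per tree, then majority vote. All settings illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n, p = 400, 8
X = rng.standard_normal((n, p))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def roughen(X, rate, rng):
    """Blank out a random fraction of entries and mean-impute them."""
    Xr = X.copy()
    mask = rng.random(X.shape) < rate
    col_means = X.mean(axis=0)
    Xr[mask] = np.take(col_means, np.where(mask)[1])
    return Xr

trees = []
for _ in range(50):
    idx = rng.integers(0, n, n)                 # bootstrap sample
    Xb = roughen(X[idx], rate=0.2, rng=rng)     # per-tree data modification
    trees.append(DecisionTreeClassifier(random_state=0).fit(Xb, y[idx]))

votes = np.mean([t.predict(X) for t in trees], axis=0)
pred = (votes > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

Each tree is trained on a differently perturbed copy of the data, so the trees disagree more than ordinary bagged trees would, while the majority vote still recovers the signal.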


Sparse Ridge Fusion For Linear Regression, Nozad Mahmood Jan 2013


Electronic Theses and Dissertations

For linear regression, traditional techniques deal with the case where the number of observations n exceeds the number of predictor variables p (n > p). In the case n < p, the classical method fails to estimate the coefficients. A solution to this problem in the case of correlated predictors is provided in this thesis. A new regularization and variable selection method is proposed under the name Sparse Ridge Fusion (SRF). In the case of highly correlated predictors, simulated examples and a real data set show that the SRF always outperforms the lasso, elastic net, and the S-Lasso, and that the SRF can select more predictor variables than the sample size n, whereas the lasso can select at most n variables.
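The lasso limitation invoked here — at most n variables can enter the model when p > n — is easy to demonstrate; the elastic net (a close relative of the methods compared in the thesis) does not share it. The snippet below is an illustration with assumed settings, not the SRF method itself.

```python
# With p > n, the lasso solution has at most n nonzero coefficients,
# while the elastic net's ridge component lets it keep more variables.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(3)
n, p = 20, 100
X = rng.standard_normal((n, p))
y = X[:, :30].sum(axis=1) + 0.1 * rng.standard_normal(n)  # 30 true signals

lasso = Lasso(alpha=0.05, max_iter=50000).fit(X, y)
enet = ElasticNet(alpha=0.05, l1_ratio=0.5, max_iter=50000).fit(X, y)
n_lasso = int(np.sum(lasso.coef_ != 0))
n_enet = int(np.sum(enet.coef_ != 0))
print("lasso selected:", n_lasso)        # bounded by n = 20
print("elastic net selected:", n_enet)   # can exceed n
```

Even though 30 predictors truly matter, the lasso saturates at the sample size, which is the motivation the abstract gives for methods like the SRF.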


Variable Selection And Parameter Estimation Using A Continuous And Differentiable Approximation To The L0 Penalty Function, Douglas Nielsen Vanderwerken Mar 2011


Theses and Dissertations

L0-penalized likelihood procedures like Mallows' Cp, AIC, and BIC directly penalize the number of variables included in a regression model. This is a straightforward approach to the problem of overfitting, and these methods are now part of every statistician's repertoire. However, these procedures have been shown to sometimes result in unstable parameter estimates as a result of the L0 penalty's discontinuity at zero. One proposed alternative, seamless-L0 (SELO), utilizes a continuous penalty function that mimics L0 and allows for stable estimates. Like other similar methods (e.g., LASSO and SCAD), SELO produces sparse solutions because the penalty function is …
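For reference, the SELO penalty discussed above can be written (following Dicker, Huang and Lin's formulation, with `tau` a small tuning constant) as p(β) = (λ/log 2)·log(|β|/(|β| + τ) + 1): it is continuous and differentiable away from zero, yet tends to the L0 penalty λ·1{β ≠ 0} as τ → 0. A small numeric check:

```python
# SELO penalty: continuous in beta, approaching lam * 1{beta != 0}
# as tau shrinks toward zero.
import numpy as np

def selo(beta, lam=1.0, tau=0.01):
    b = np.abs(beta)
    return lam / np.log(2.0) * np.log(b / (b + tau) + 1.0)

print(selo(0.0))              # exactly 0 at beta = 0, like L0
print(selo(2.0))              # close to lam for beta far from 0
print(selo(2.0, tau=1e-6))    # closer still as tau shrinks
```

Because the penalty is nearly flat away from zero, large coefficients are barely shrunk (unlike the LASSO), while the steep rise near zero still drives small coefficients to exactly zero.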


Cost-Efficient Variable Selection Using Branching Lars, Li Hua Yue Nov 2010


Electronic Thesis and Dissertation Repository

Variable selection is a difficult problem in statistical model building. Identification of cost-efficient diagnostic factors is very important to health researchers, but most variable selection methods do not take into account the cost of collecting data for the predictors. The trade-off between statistical significance and the cost of collecting data for the statistical model is our focus. A Branching LARS (BLARS) procedure has been developed that can select and estimate the important predictors to build a model that is not only good at prediction but also cost-efficient. The BLARS method is an extension of the LARS variable selection method to incorporate …


Model Selection With Information Criteria, Changjiang Xu Oct 2010


Electronic Thesis and Dissertation Repository

This thesis is on model selection using information criteria. The information criteria include the generalized information criterion and a family of Bayesian information criteria. The properties and improvement of the information criteria are investigated.

We analyze nonasymptotic and asymptotic properties of the information criteria for linear models, probabilistic models, and high-dimensional models, respectively. We obtain the probability of selecting a model and compute it by Monte Carlo methods. We derive the conditions under which the criteria are consistent, underfitting, or overfitting.

We further propose new model selection procedures to improve the information criteria. The procedures combine the information criteria with …
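The Monte Carlo computation of model-selection probabilities mentioned above can be illustrated on a toy nested family. The sketch below uses assumed settings (a true model with two predictors, five nested candidates) and the standard criterion n·log(RSS/n) + penalty·k, with penalty 2 for AIC and log n for BIC.

```python
# Monte Carlo estimate of P(criterion selects the true model)
# for AIC vs BIC over a nested family of linear models.
import numpy as np

rng = np.random.default_rng(4)
n, reps = 200, 500
true_k = 2                      # true model uses the first 2 predictors

def ic_choice(X, y, penalty):
    best, best_k = np.inf, None
    for k in range(1, X.shape[1] + 1):          # nested candidates
        Xk = X[:, :k]
        resid = y - Xk @ np.linalg.lstsq(Xk, y, rcond=None)[0]
        rss = resid @ resid
        crit = n * np.log(rss / n) + penalty * k
        if crit < best:
            best, best_k = crit, k
    return best_k

hits = {"AIC": 0, "BIC": 0}
for _ in range(reps):
    X = rng.standard_normal((n, 5))
    y = X[:, 0] + X[:, 1] + rng.standard_normal(n)
    hits["AIC"] += ic_choice(X, y, 2.0) == true_k
    hits["BIC"] += ic_choice(X, y, np.log(n)) == true_k
print({k: v / reps for k, v in hits.items()})
```

The simulation reproduces the textbook contrast the thesis studies: BIC's heavier penalty makes it select the true model more often at this sample size, while AIC overfits with non-vanishing probability.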


Shrinkage Estimation In Partially Linear Models With Measurement Error, Yifang Li May 2010


All Theses

In practice, measurement error in the covariates is often encountered. Measurement error has several effects when using ordinary least squares for regression problems. In this thesis, we introduce the basic idea of correcting the bias caused by different types of measurement error. We then focus on variable selection for partially linear models when some of the covariates are measured with additive errors. The bias caused by the measurement error is corrected by subtracting a bias-correction term in the squared loss function. Adaptive LASSO is used for the variable selection procedure. The rate of convergence and the asymptotic …


Variable Selection In Competing Risks Using The L1-Penalized Cox Model, Xiangrong Kong Sep 2008


Theses and Dissertations

One situation in survival analysis is that the failure of an individual can happen because of one of multiple distinct causes. Survival data generated in this scenario are commonly referred to as competing risks data. One of the major tasks when examining survival data is to assess the dependence of survival time on explanatory variables. In competing risks, as with ordinary univariate survival data, there may be explanatory variables associated with the risks arising from the different causes being studied. The same variable might have different degrees of influence on the risks due to different causes. Given a set of …