
Stacking

Articles 1 - 4 of 4

Full-Text Articles in Other Statistics and Probability

In Praise Of Partially Interpretable Predictors, Tri Le, Bertrand S. Clarke Jan 2020

Department of Statistics: Faculty Publications

Often there is an uninterpretable model that is statistically as good as, if not better than, a successful interpretable model. Accordingly, if one restricts attention to interpretable models, then one may sacrifice predictive power or other desirable properties. A minimal condition for an interpretable, usually parametric, model to be better than another model is that the first should have smaller mean-squared error or integrated mean-squared error. We show through a series of examples that this is often not the case and give the asymptotic forms of a variety of interpretable, partially interpretable, and noninterpretable methods. We find techniques that combine aspects of both …


Investigation Of Model Stacking For Drug Sensitivity Prediction, Kevin Matlock, Carlos De Niz, Raziur Rahman, Souparno Ghosh, Ranadip Pal Jan 2018

Department of Statistics: Faculty Publications

Background: A significant problem in precision medicine is the prediction of drug sensitivity for individual cancer cell lines. Predictive models such as Random Forests have shown promising performance when predicting from individual genomic features such as gene expressions. However, the availability of various other data types, including information on multiple tested drugs, motivates the design of predictive models that incorporate these various data types.

Results: We explore the predictive performance of model stacking and the effect of stacking on the predictive bias and squared error. In addition, we discuss the analytical underpinnings supporting the advantages of stacking in reducing …
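The two-stage idea behind stacking over heterogeneous data types can be sketched as follows. This is a hypothetical illustration, not the paper's experiments: the "genomic" and "drug descriptor" feature blocks are synthetic, and the base and combiner models are plain least-squares fits standing in for whatever learners one would actually use.

```python
# Hypothetical sketch of stacking over heterogeneous feature blocks:
# fit one base model per block, then regress the response on the
# base models' out-of-fold predictions to learn the stacked combiner.
import numpy as np

rng = np.random.default_rng(2)
n = 250
genomic = rng.standard_normal((n, 5))  # stand-in for expression features
drug = rng.standard_normal((n, 3))     # stand-in for drug descriptors
y = genomic[:, 0] - 2 * drug[:, 1] + 0.3 * rng.standard_normal(n)

def ols_fit(X, y):
    Xa = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return beta

def ols_pred(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Stage 1: out-of-fold predictions from each block-specific base model,
# so the combiner is not fit on in-sample (overfit) predictions.
K = 5
folds = np.array_split(rng.permutation(n), K)
oof = np.zeros((n, 2))
for idx in folds:
    mask = np.ones(n, dtype=bool)
    mask[idx] = False
    for j, X in enumerate((genomic, drug)):
        oof[idx, j] = ols_pred(ols_fit(X[mask], y[mask]), X[idx])

# Stage 2: the stacking combiner is a regression of y on the
# out-of-fold predictions of the base models.
w = ols_fit(oof, y)
stacked = ols_pred(w, oof)
print("stacked MSE:", round(float(np.mean((y - stacked) ** 2)), 3))
```

Fitting the combiner on out-of-fold rather than in-sample predictions is the key design choice: it keeps the second stage from rewarding base models that merely overfit.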


A Bayes Interpretation Of Stacking For M-Complete And M-Open Settings, Tri Le, Bertrand S. Clarke Jan 2017

Department of Statistics: Faculty Publications

In M-open problems where no true model can be conceptualized, it is common to back off from modeling and merely seek good prediction. Even in M-complete problems, taking a predictive approach can be very useful. Stacking is a model averaging procedure that gives a composite predictor by combining individual predictors from a list of models using weights that optimize a cross-validation criterion. We show that the stacking weights also asymptotically minimize a posterior expected loss. Hence we formally provide a Bayesian justification for cross-validation. Often the weights are constrained to be positive and sum to one. For greater generality, …
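The weight-selection step described above can be sketched concretely. This is a minimal illustration on synthetic data (not the paper's setup): two hypothetical base models, out-of-fold predictions from 5-fold cross-validation, and a grid search over the simplex constraint (weights nonnegative, summing to one), which with two models reduces to a single free weight.

```python
# Minimal sketch of stacking: choose the weight on two base predictors
# to minimize a cross-validation squared-error criterion, with weights
# constrained to be nonnegative and sum to one.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = np.sin(x) + 0.3 * rng.standard_normal(n)  # synthetic regression data

def fit_linear(xtr, ytr):
    b1, b0 = np.polyfit(xtr, ytr, 1)
    return lambda xx: b0 + b1 * xx

def fit_cubic(xtr, ytr):
    c = np.polyfit(xtr, ytr, 3)
    return lambda xx: np.polyval(c, xx)

# Out-of-fold predictions from each base model (5-fold CV).
K = 5
folds = np.array_split(rng.permutation(n), K)
oof = np.zeros((n, 2))
for idx in folds:
    mask = np.ones(n, dtype=bool)
    mask[idx] = False
    oof[idx, 0] = fit_linear(x[mask], y[mask])(x[idx])
    oof[idx, 1] = fit_cubic(x[mask], y[mask])(x[idx])

# Simplex constraint with two models: weight w on the cubic model,
# 1 - w on the linear model.  Grid-search the CV criterion over w.
ws = np.linspace(0.0, 1.0, 101)
cv_err = [np.mean((y - ((1 - w) * oof[:, 0] + w * oof[:, 1])) ** 2)
          for w in ws]
w_hat = ws[int(np.argmin(cv_err))]
print(f"stacking weights: linear={1 - w_hat:.2f}, cubic={w_hat:.2f}")
```

With more models the grid search would be replaced by constrained least squares over the simplex, but the criterion being minimized is the same cross-validation error.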


Comparing Bayes Model Averaging And Stacking When Model Approximation Error Cannot Be Ignored, Bertrand S. Clarke Jan 2003

Department of Statistics: Faculty Publications

We compare Bayes Model Averaging, BMA, to a non-Bayes form of model averaging called stacking. In stacking, the weights are no longer posterior probabilities of models; they are obtained by a technique based on cross-validation. When the correct data-generating model (DGM) is on the list of models under consideration, BMA is never worse than stacking and often is demonstrably better, provided that the noise level is of order commensurate with the coefficients and explanatory variables. Here, however, we focus on the case that the correct DGM is not on the model list and may not be well approximated by …
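The contrast between the two weighting schemes can be illustrated on synthetic data. This is a hypothetical sketch, not the paper's experiments: the DGM (a square-root mean) is deliberately left off a list of two misspecified polynomial models, BMA weights are approximated via BIC-based posterior model probabilities under a flat model prior (a common approximation, and an assumption here), and the stacking weight is chosen by cross-validation as before.

```python
# Hypothetical comparison: BIC-approximated BMA weights vs a
# cross-validated stacking weight when the data-generating model
# (sqrt mean) is NOT on the candidate list (linear, quadratic).
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 3, n)
y = np.sqrt(x) + 0.2 * rng.standard_normal(n)  # DGM not on the list

# Candidate design matrices: linear and quadratic (both misspecified).
designs = [np.column_stack([np.ones(n), x]),
           np.column_stack([np.ones(n), x, x ** 2])]

def rss(X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

# BIC approximation to posterior model probabilities (flat prior).
bics = np.array([n * np.log(rss(X) / n) + X.shape[1] * np.log(n)
                 for X in designs])
bma_w = np.exp(-0.5 * (bics - bics.min()))
bma_w /= bma_w.sum()

# Stacking weight from 5-fold out-of-fold predictions, grid over the
# simplex (weight w on the quadratic model, 1 - w on the linear one).
K = 5
folds = np.array_split(rng.permutation(n), K)
oof = np.zeros((n, 2))
for idx in folds:
    mask = np.ones(n, dtype=bool)
    mask[idx] = False
    for j, X in enumerate(designs):
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        oof[idx, j] = X[idx] @ beta
ws = np.linspace(0.0, 1.0, 101)
cv_err = [np.mean((y - ((1 - w) * oof[:, 0] + w * oof[:, 1])) ** 2)
          for w in ws]
stack_w = ws[int(np.argmin(cv_err))]

print("BMA weights:", np.round(bma_w, 3))
print("stacking weight on quadratic:", round(stack_w, 2))
```

The qualitative point this sketch aims at: BMA's posterior weights tend to concentrate on a single (wrong) model as n grows, while stacking's cross-validated weights are free to hedge between misspecified models.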