Articles 1 - 10 of 10
Full-Text Articles in Statistical Models
Gis-Integrated Mathematical Modeling Of Social Phenomena At Macro- And Micro- Levels—A Multivariate Geographically-Weighted Regression Model For Identifying Locations Vulnerable To Hosting Terrorist Safe-Houses: France As Case Study, Elyktra Eisman
FIU Electronic Theses and Dissertations
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat—it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to …
Variable Selection In Single Index Varying Coefficient Models With Lasso, Peng Wang
Doctoral Dissertations
The single index varying coefficient model is an attractive statistical model due to its ability to reduce dimensionality and its ease of interpretation. There are many theoretical studies and practical applications of it, but typically without variable selection, and no public software is available for fitting it. Here we propose a new algorithm to fit the single index varying coefficient model and to carry out variable selection in the index part with LASSO. The core idea is a two-step scheme that alternates between estimating the coefficient functions and selecting-and-estimating the single index. Both in simulation and in an application to a Geoscience dataset, we …
Model Selection For Gaussian Mixture Models For Uncertainty Qualification, Yiyi Chen, Guang Lin, Xuan Liu
The Summer Undergraduate Research Fellowship (SURF) Symposium
Clustering is the task of assigning objects to groups so that objects within a group are more similar to each other than to those in other groups. A Gaussian mixture model fit with the Expectation Maximization (EM) method is one of the most general ways to cluster a large data set. However, this method needs the number of Gaussian modes (clusters) as input so that it can approximate the original data set. Developing a method to automatically determine the number of component distributions would help apply this method in broader contexts. In the original algorithm, there is a variable representing the weight of …
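As a rough illustration of the problem the project addresses, here is a minimal pure-Python sketch of EM for a one-dimensional Gaussian mixture together with BIC-based selection of the number of components. It is a generic textbook version under simplifying assumptions (1-D data, deterministic initialization), not the project's actual method:

```python
import math
import random

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_gmm_1d(data, k, iters=100):
    """Fit a 1-D Gaussian mixture with k components by EM; returns
    (weights, means, variances, log_likelihood)."""
    n = len(data)
    lo, hi = min(data), max(data)
    # deterministic initialization: means spread evenly over the data range
    means = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]
    grand_mean = sum(data) / n
    variances = [sum((x - grand_mean) ** 2 for x in data) / n] * k
    weights = [1.0 / k] * k
    ll = 0.0
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp, ll = [], 0.0
        for x in data:
            probs = [w * normal_pdf(x, m, v)
                     for w, m, v in zip(weights, means, variances)]
            total = sum(probs)
            ll += math.log(total)
            resp.append([p / total for p in probs])
        # M-step: re-estimate weights, means, variances per component
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            weights[j] = nj / n
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            # variance floor guards against degenerate (collapsing) components
            variances[j] = max(
                sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, data)) / nj,
                0.1)
    return weights, means, variances, ll

def bic(ll, k, n):
    # a 1-D mixture with k components has 3k - 1 free parameters
    return (3 * k - 1) * math.log(n) - 2.0 * ll

# two well-separated clusters; BIC should favor k = 2 over k = 1
rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(40)] + [rng.gauss(10, 1) for _ in range(40)]
scores = {k: bic(em_gmm_1d(data, k)[3], k, len(data)) for k in (1, 2, 3)}
```

Scanning candidate values of k and keeping the one with the lowest BIC is one standard way to "automatically determine the number of single distribution models" that the abstract mentions.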
Using Spatiotemporal Methods To Fill Gaps In Energy Usage Interval Data, Kristin K. Graves
Theses and Dissertations
Researchers analyzing spatiotemporal or panel data, which varies both in location and over time, often find that their data has holes or gaps. This thesis explores alternative methods for filling those gaps and also suggests a set of techniques for evaluating those gap-filling methods to determine which works best.
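One simple temporal baseline such a comparison might include is linear interpolation between the nearest observed readings. The sketch below is a generic baseline, not the thesis's spatiotemporal method, and the readings are hypothetical:

```python
def fill_gaps_linear(series):
    """Fill None gaps in a regularly spaced series by linear interpolation
    between the nearest observed neighbors; gaps touching either end are
    filled with the closest observed value."""
    filled = list(series)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            j = i
            while j < n and filled[j] is None:
                j += 1                 # j: first observed index after the gap
            left = filled[i - 1] if i > 0 else None
            right = filled[j] if j < n else None
            for k in range(i, j):
                if left is None:
                    filled[k] = right  # leading gap: carry the first value back
                elif right is None:
                    filled[k] = left   # trailing gap: carry the last value forward
                else:
                    frac = (k - i + 1) / (j - i + 1)
                    filled[k] = left + frac * (right - left)
            i = j
        else:
            i += 1
    return filled

# hourly energy-use readings (hypothetical) with a two-hour gap
readings = fill_gaps_linear([10.0, None, None, 16.0, 18.0])
# → [10.0, 12.0, 14.0, 16.0, 18.0]
```

A spatiotemporal method would additionally borrow strength from nearby locations rather than interpolating each series in isolation, which is the comparison the thesis explores.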
The Effects Of Quantitative Easing In The United States: Implications For Future Central Bank Policy Makers, Matthew Q. Rubino
Senior Honors Projects, 2010-2019
The purpose of this thesis is to examine the effects of the Federal Reserve’s recent bond buying programs, specifically Quantitative Easing 1, Quantitative Easing 2, Operation Twist (or the Fed’s Maturity Extension Program), and Quantitative Easing 3. In this study, I provide a picture of the economic landscape leading up to the deployment of the programs, an overview of quantitative easing including each program’s respective objectives, and how and why the Fed decided to implement the programs. Using empirical analysis, I measure each program’s effectiveness by applying four models including a yield curve model, an inflation model, a money supply …
Examining The Performance Of The Metropolis-Hastings Robbins-Monro Algorithm In The Estimation Of Multilevel Multidimensional Irt Models, Bozhidar M. Bashkov
Dissertations, 2014-2019
The purpose of this study was to review the challenges that exist in the estimation of complex (multidimensional) models applied to complex (multilevel) data and to examine the performance of the recently developed Metropolis-Hastings Robbins-Monro (MH-RM) algorithm (Cai, 2010a, 2010b), designed to overcome these challenges and implemented in both commercial and open-source software programs. Unlike other methods, which either rely on high-dimensional numerical integration or approximation of the entire multidimensional response surface, MH-RM makes use of Fisher's Identity to employ stochastic imputation (i.e., data augmentation) via the Metropolis-Hastings sampler and then applies the stochastic approximation method of Robbins and Monro …
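The Metropolis-Hastings component that MH-RM uses for stochastic imputation can be sketched generically. Below is a random-walk sampler for a one-dimensional target density, far simpler than the multilevel IRT setting but illustrating the accept/reject mechanism:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ Normal(x, step^2) and
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_ratio = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# sample from a standard normal via its log-density (up to a constant)
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

In MH-RM proper, draws like these impute the latent traits, and a Robbins-Monro step uses them to update the item parameters, avoiding high-dimensional numerical integration.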
Bootstrapping Vs. Asymptotic Theory In Property And Casualty Loss Reserving, Andrew J. Difronzo Jr.
Honors Projects in Mathematics
One of the key functions of a property and casualty (P&C) insurance company is loss reserving, which calculates how much money the company should retain in order to pay out future claims. Most P&C insurance companies use non-stochastic (non-random) methods to estimate these future liabilities. However, future loss data can also be projected using generalized linear models (GLMs) and stochastic simulation. Two simulation methods that will be the focus of this project are: bootstrapping methodology, which resamples the original loss data (creating pseudo-data in the process) and fits the GLM parameters based on the new data to estimate the sampling …
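The bootstrapping idea, resampling the observed losses with replacement to approximate the sampling distribution of an estimate, can be sketched as follows. The loss amounts are hypothetical, and a simple mean with a percentile interval stands in for the GLM-based reserve estimate:

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap interval: resample with replacement to create
    pseudo-data, recompute the statistic on each pseudo-dataset, and take
    empirical quantiles of the resampled values."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# hypothetical annual loss amounts (in $000s), a toy stand-in for loss data
losses = [120, 95, 210, 150, 180, 130, 160, 300, 110, 140]
low, high = bootstrap_ci(losses, statistics.mean)
# the interval straddles the observed mean of 159.5
```

In the reserving context the resampled pseudo-data would refit the GLM each time, so the interval reflects uncertainty in the projected liabilities rather than in a simple mean.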
Relationship Between High School Math Course Selection And Retention Rates At Otterbein University, Lauren A. Fisher
Undergraduate Honors Thesis Projects
Binary logistic regression was used to study the relationship between high school math course selection and retention rates at Otterbein University. Graduation rates from postsecondary institutions are low in the United States and, more specifically, at Otterbein. This study is important in helping to determine what can raise retention rates and, ultimately, graduation rates. It directs focus toward high school math course selection and what should be changed before entering a postsecondary institution. Otterbein will have a better idea of what type of students to recruit and which students may be good candidates with some extra help. Recruiting is expensive, …
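A minimal sketch of binary logistic regression fit by gradient ascent, with a made-up toy dataset (highest math course level versus retention) standing in for the Otterbein data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Fit P(y = 1 | x) = sigmoid(b0 + b1 * x) by gradient ascent on the
    log-likelihood (one predictor, no regularization)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(b0 + b1 * x)  # residual on the probability scale
            g0 += err
            g1 += err * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# hypothetical data: highest math course level (1-4) vs. retained (1) or not (0)
levels   = [1, 1, 2, 2, 3, 3, 4, 4]
retained = [0, 0, 0, 1, 1, 1, 1, 1]
b0, b1 = fit_logistic(levels, retained)
# b1 > 0 here: a higher course level predicts a higher retention probability
```

The fitted slope's sign and magnitude are what a study like this would interpret: a positive b1 means students who took more advanced math courses have higher predicted odds of being retained.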
Bayesian Function-On-Function Regression For Multi-Level Functional Data, Mark J. Meyer, Brent A. Coull, Francesco Versace, Paul Cinciripini, Jeffrey S. Morris
Jeffrey S. Morris
Medical and public health research increasingly involves the collection of complex, high-dimensional data. In particular, functional data, where the unit of observation is a curve or set of curves finely sampled over a grid, are frequently obtained. Moreover, researchers often sample multiple curves per person, resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data, presenting a simple model as well as a more extensive mixed model framework, along with multiple functional posterior …
Functional Regression, Jeffrey S. Morris
Jeffrey S. Morris
Functional data analysis (FDA) involves the analysis of data whose ideal units of observation are functions defined on some continuous domain, and the observed data consist of a sample of functions taken from some population, sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the development of this field, which has accelerated in the past 10 years to become one of the fastest growing areas of statistics, fueled by the growing number of applications yielding this type of data. One unique characteristic of FDA is the need to combine information both across and within functions, which Ramsay and …