Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons

Applied Mathematics

Neural networks (Computer science)

Articles 1 - 4 of 4

Full-Text Articles in Physical Sciences and Mathematics

Exploring The Potential Of Sparse Coding For Machine Learning, Sheng Yang Lundquist Oct 2020

Dissertations and Theses

While deep learning has proven successful for various tasks in computer vision, deep-learning models have several limitations when compared to human performance. Specifically, human vision is largely robust to noise and distortions, whereas the performance of deep-learning models tends to be brittle under modifications of test images, including susceptibility to adversarial examples. Additionally, deep-learning methods typically require very large collections of training examples to perform a task well, whereas humans can learn the same task from far fewer examples.

In this dissertation, I investigate whether the use …
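
The abstract is cut off before it describes the approach, so the following is only a minimal illustration of sparse coding itself, not the dissertation's method: it infers a sparse code for a signal against a fixed random dictionary using ISTA (iterative shrinkage-thresholding). The dictionary sizes, the lambda value, and the choice of ISTA are all assumptions made for the sketch.

```python
import numpy as np

def soft_threshold(x, lam):
    """Element-wise soft-thresholding operator used in ISTA."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code a minimizing 0.5*||x - D a||^2 + lam*||a||_1
    via ISTA, assuming a fixed (learned or random) dictionary D."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)             # gradient of the reconstruction term
        a = soft_threshold(a - step * grad, step * lam)
    return a

# Toy usage: encode a random "image patch" with an overcomplete random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))           # 64-dim patches, 256 dictionary elements
D /= np.linalg.norm(D, axis=0)               # unit-norm columns
x = rng.standard_normal(64)
a = sparse_code(x, D)
print("non-zero coefficients:", np.count_nonzero(a), "of", a.size)
```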


Neural Extensions To Robust Parameter Design, Bernard Jacob Loeffelholz Sep 2010

Theses and Dissertations

Robust parameter design (RPD) is applied to systems in which a user wants to minimize the variance of the system response caused by uncontrollable factors while obtaining a consistent and reliable response over time. We propose using artificial neural networks to model highly non-linear responses that quadratic regression fails to capture accurately. RPD is conducted under the assumption that the relationship between the system response and the controllable and uncontrollable variables does not change over time. We propose a methodology to find a new set of settings that will be robust to moderate system degradation while remaining robust …
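
As a rough illustration of the general idea, not of the thesis's methodology, the sketch below fits a neural-network response surface over controllable and uncontrollable (noise) factors and then ranks candidate control settings by the predicted response variance under the noise distribution. The simulated data, the grid of candidate settings, the Monte Carlo noise draws, and the use of scikit-learn's MLPRegressor are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Simulated designed-experiment data: 2 controllable factors x, 1 noise factor z.
# (A stand-in for real RPD data; the response is deliberately non-quadratic.)
X_ctrl = rng.uniform(-1, 1, size=(400, 2))
Z_noise = rng.normal(0, 0.5, size=(400, 1))
y = np.sin(3 * X_ctrl[:, 0]) + X_ctrl[:, 1] ** 3 + Z_noise[:, 0] * X_ctrl[:, 0]

# Neural-network response surface over (controllable, noise) inputs.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(np.hstack([X_ctrl, Z_noise]), y)

# Evaluate candidate control settings: Monte Carlo over the noise distribution,
# then rank settings by predicted response variance (the RPD robustness criterion).
candidates = np.array(np.meshgrid(np.linspace(-1, 1, 21),
                                  np.linspace(-1, 1, 21))).reshape(2, -1).T
z_draws = rng.normal(0, 0.5, size=(200, 1))
variances = []
for x in candidates:
    inputs = np.hstack([np.tile(x, (len(z_draws), 1)), z_draws])
    variances.append(model.predict(inputs).var())
best = candidates[int(np.argmin(variances))]
print("most robust control setting:", best)
```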


The Mathematics Of Measuring Capabilities Of Artificial Neural Networks, Martha A. Carter Jun 1995

Theses and Dissertations

Researchers rely on the mathematics of Vapnik and Chervonenkis to quantify the capabilities of specific artificial neural network (ANN) architectures. The quantifier is known as the V-C dimension and is defined on functions or sets. Its value is the largest cardinality ℓ for which there is at least one set of ℓ vectors in R^d such that every dichotomy of that set into two subsets can be implemented by the function or set. Stated another way, the V-C dimension of a set of functions is the largest cardinality of a set, such …
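
The definition is easier to read in symbols. A minimal restatement (writing ℓ for the cardinality, and assuming amsmath), together with the classic halfspace example:

```latex
% V-C dimension of a class F of {0,1}-valued functions on R^d:
% the largest cardinality of a point set that F shatters.
\[
  \operatorname{VCdim}(\mathcal{F})
    = \max\bigl\{\, \ell : \exists\, \{x_1,\dots,x_\ell\} \subset \mathbb{R}^d
      \ \text{such that}\
      \bigl|\{(f(x_1),\dots,f(x_\ell)) : f \in \mathcal{F}\}\bigr| = 2^{\ell} \,\bigr\}
\]
% Classic example (a standard fact, not taken from the thesis): affine halfspaces
% in R^d, i.e. single-layer perceptrons with a bias term, have V-C dimension d + 1.
```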


Nonlinear Time Series Analysis, James A. Stewart Mar 1995

Theses and Dissertations

This thesis applies neural-network feature selection techniques to multivariate time series data to improve prediction of a target time series. Two approaches to feature selection are used. First, a subset enumeration method determines which financial indicators are most useful for predicting the S&P 500 futures daily price. The candidate indicators evaluated include RSI, Stochastics, and several moving averages. Results indicate that the Stochastics and RSI indicators yield better predictions than the moving averages. The second approach to feature selection is the calculation of individual saliency metrics. A new decision boundary-based individual saliency …
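
As a sketch of the first approach only (subset enumeration), and not of the thesis's experiments or its decision-boundary saliency metric, the code below trains one small network per indicator subset on synthetic stand-in data and ranks the subsets by held-out error. The feature names, the synthetic target, and the use of scikit-learn's MLPRegressor are all assumptions.

```python
import itertools

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic stand-in for the indicator series (the thesis uses real S&P 500
# futures data); the feature names below are hypothetical.
n = 500
features = {
    "rsi":          rng.uniform(0, 100, n),
    "stochastic_k": rng.uniform(0, 100, n),
    "ma_5":         rng.normal(0, 1, n),
    "ma_20":        rng.normal(0, 1, n),
}
# Target built so the oscillator-style features carry most of the signal.
target = 0.02 * features["rsi"] + 0.01 * features["stochastic_k"] + rng.normal(0, 0.3, n)

names = list(features)
X = np.column_stack([features[k] for k in names])
X_tr, X_te, y_tr, y_te = train_test_split(X, target, test_size=0.3, random_state=0)

# Subset enumeration: train one small network per feature subset and rank
# the subsets by held-out mean squared error.
results = []
for r in range(1, len(names) + 1):
    for subset in itertools.combinations(range(len(names)), r):
        cols = list(subset)
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
        net.fit(X_tr[:, cols], y_tr)
        mse = np.mean((net.predict(X_te[:, cols]) - y_te) ** 2)
        results.append((mse, [names[i] for i in cols]))

for mse, subset in sorted(results)[:3]:
    print(f"{mse:.4f}  {subset}")
```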