Physical Sciences and Mathematics Commons

Articles 1 - 11 of 11

Full-Text Articles in Physical Sciences and Mathematics

Optimal Learning Rates For Neural Networks, Tyler Moncur Jul 2020

Optimal Learning Rates For Neural Networks, Tyler Moncur

Theses and Dissertations

Neural networks have long been known as universal function approximators and have more recently been shown to be powerful and versatile in practice. But it can be extremely challenging to find the right set of parameters and hyperparameters. Model training is both expensive and difficult due to the large number of parameters and sensitivity to hyperparameters such as learning rate and architecture. Hyperparameter searches are notorious for requiring tremendous amounts of processing power and human resources. This thesis provides an analytic approach to estimating the optimal value of one of the key hyperparameters in neural networks, the learning rate. Where …
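
The thesis's own derivation is not reproduced in the abstract, but one standard way to estimate a learning rate analytically is through curvature: for a quadratic loss, gradient descent converges when the step size is below 2/λ_max, where λ_max is the largest Hessian eigenvalue, and λ_max can be estimated by power iteration on Hessian-vector products. A minimal NumPy sketch under that assumption, with a toy quadratic standing in for a real network loss:

```python
import numpy as np

# Toy quadratic loss f(w) = 0.5 * w @ A @ w, whose Hessian is A.
# The setup here is illustrative, not taken from the thesis.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T / 20.0  # symmetric positive semi-definite Hessian

def hessian_vector_product(v):
    # For a real model this would be computed with autodiff;
    # for the toy quadratic it is simply A @ v.
    return A @ v

# Power iteration estimates the largest Hessian eigenvalue.
v = rng.standard_normal(20)
for _ in range(100):
    v = hessian_vector_product(v)
    v /= np.linalg.norm(v)
lam_max = v @ hessian_vector_product(v)

# For a quadratic, gradient descent is stable for lr < 2 / lam_max;
# lr = 1 / lam_max is a common conservative choice.
lr = 1.0 / lam_max
print(f"estimated lambda_max = {lam_max:.4f}, learning rate = {lr:.4f}")
```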


Exponential Stability Of Intrinsically Stable Dynamical Networks And Switched Networks With Time-Varying Time Delays, David Patrick Reber Apr 2019

Exponential Stability Of Intrinsically Stable Dynamical Networks And Switched Networks With Time-Varying Time Delays, David Patrick Reber

Theses and Dissertations

Dynamic processes on real-world networks are time-delayed due to finite processing speeds and the need to transmit data over nonzero distances. These time delays often destabilize the network's dynamics, but are difficult to analyze because they increase the dimension of the network. We present results outlining an alternative means of analyzing these networks, by focusing analysis on the Lipschitz matrix of the relatively low-dimensional undelayed network. The key criterion, intrinsic stability, is computationally efficient to verify by use of the power method. We demonstrate applications from control theory and neural networks.
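
The power method mentioned above is simple to state in code. The sketch below assumes a nonnegative Lipschitz matrix bounding how strongly each node can influence each other node (the matrix values are made up for the demo); intrinsic stability corresponds to its spectral radius being strictly less than one:

```python
import numpy as np

def spectral_radius(L, iters=200, tol=1e-10):
    """Estimate the spectral radius of a nonnegative matrix by the power method."""
    v = np.ones(L.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = L @ v
        lam_new = np.linalg.norm(w)
        if lam_new == 0.0:
            return 0.0
        v = w / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam

# Illustrative Lipschitz matrix for a 3-node network (entry [i, j] bounds how
# strongly node j can influence node i); values are invented for the demo.
L = np.array([[0.0, 0.4, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])

# Intrinsic stability requires the spectral radius to be strictly below 1.
rho = spectral_radius(L)
print(f"spectral radius = {rho:.4f} -> "
      f"{'intrinsically stable' if rho < 1 else 'not verified'}")
```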


Improving The Quality Of Neural Machine Translation Using Terminology Injection, Duane K. Dougal Dec 2018

Improving The Quality Of Neural Machine Translation Using Terminology Injection, Duane K. Dougal

Theses and Dissertations

Most organizations use an increasing number of domain- or organization-specific words and phrases. A translation process, whether human or automated, must also be able to accurately and efficiently use these specific multilingual terminology collections. However, comparatively little has been done to explore the use of vetted terminology as an input to machine translation (MT) for improved results. In fact, no single established process currently exists to integrate terminology into MT as a general practice, and especially no established process for neural machine translation (NMT) exists to ensure that the translation of individual terms is consistent with an approved terminology collection. …
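
One common way to integrate vetted terminology into MT, which may or may not match the thesis's approach, is placeholder injection: vetted source terms are masked with tokens before translation, and the approved target terms are restored afterward. A minimal sketch, with an invented term list and a stand-in for the NMT call:

```python
# Placeholder-based terminology injection for MT: a hedged sketch.
# The term list, placeholder scheme, and translation stub are illustrative.
TERMS = {"widget assembly": "Baugruppe"}  # vetted source term -> approved target term

def inject_terms(source: str) -> tuple[str, dict]:
    """Mask vetted source terms with placeholder tokens before translation."""
    mapping = {}
    for i, (src_term, tgt_term) in enumerate(TERMS.items()):
        token = f"<term{i}>"
        if src_term in source:
            source = source.replace(src_term, token)
            mapping[token] = tgt_term
    return source, mapping

def restore_terms(translation: str, mapping: dict) -> str:
    """Replace placeholder tokens with the approved target-language terms."""
    for token, tgt_term in mapping.items():
        translation = translation.replace(token, tgt_term)
    return translation

masked, mapping = inject_terms("Install the widget assembly first.")
# translation = nmt_model.translate(masked)  # hypothetical NMT call
translation = "Installieren Sie zuerst die <term0>."  # stand-in output
print(restore_terms(translation, mapping))
```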


Improving The Separability Of A Reservoir Facilitates Learning Transfer, David Norton, Dan A. Ventura Jun 2009

Improving The Separability Of A Reservoir Facilitates Learning Transfer, David Norton, Dan A. Ventura

Faculty Publications

We use a type of reservoir computing called the liquid state machine (LSM) to explore learning transfer. The LSM is a neural network model that uses a reservoir of recurrent spiking neurons as a filter for a readout function. We develop a method of training the reservoir, or liquid, that is not driven by residual error. Instead, the liquid is evaluated based on its ability to separate different classes of input into different spatial patterns of neural activity. Using this method, we train liquids on two qualitatively different types of artificial problems. Resulting liquids are shown to …
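
A rough sketch of scoring a liquid by separation rather than readout error: record the liquid's state vectors for labeled inputs, then compare between-class centroid distance to within-class spread. The metric below is a simple Fisher-style ratio, not necessarily the paper's exact measure:

```python
import numpy as np

def separation_score(states: np.ndarray, labels: np.ndarray) -> float:
    """Crude stand-in for a liquid's separation property: mean distance between
    class centroids divided by mean within-class spread. Illustrates scoring a
    reservoir without using readout error; the paper's metric may differ."""
    classes = np.unique(labels)
    centroids = np.array([states[labels == c].mean(axis=0) for c in classes])
    between = np.mean([np.linalg.norm(ci - cj)
                       for i, ci in enumerate(centroids)
                       for cj in centroids[i + 1:]])
    within = np.mean([np.linalg.norm(states[labels == c] - centroids[k],
                                     axis=1).mean()
                      for k, c in enumerate(classes)])
    return between / (within + 1e-12)

# Illustrative liquid states: 100 samples of 50-dim neural activity, 2 classes.
rng = np.random.default_rng(1)
states = rng.standard_normal((100, 50))
labels = rng.integers(0, 2, size=100)
print(f"separation = {separation_score(states, labels):.3f}")
```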


Improving Liquid State Machines Through Iterative Refinement Of The Reservoir, R David Norton Mar 2008

Improving Liquid State Machines Through Iterative Refinement Of The Reservoir, R David Norton

Theses and Dissertations

Liquid State Machines (LSMs) exploit the power of recurrent spiking neural networks (SNNs) without training the SNN. Instead, a reservoir, or liquid, is randomly created which acts as a filter for a readout function. We develop three methods for iteratively refining a randomly generated liquid to create a more effective one. First, we apply Hebbian learning to LSMs by building the liquid with spike-timing-dependent plasticity (STDP) synapses. Second, we create an eligibility-based reinforcement learning algorithm for synaptic development. Third, we apply principles of Hebbian learning and reinforcement learning to create a new algorithm called separation driven synaptic modification …
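
As context for the first method, a pair-based STDP rule strengthens a synapse when the presynaptic spike precedes the postsynaptic spike and weakens it otherwise, with exponential decay in the timing difference. A minimal sketch with illustrative constants (not the thesis's settings):

```python
import numpy as np

def stdp_delta_w(t_pre: float, t_post: float,
                 a_plus: float = 0.01, a_minus: float = 0.012,
                 tau: float = 20.0) -> float:
    """Pair-based STDP: potentiate if the presynaptic spike precedes the
    postsynaptic spike, depress otherwise. Constants are illustrative."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)   # pre before post -> strengthen
    return -a_minus * np.exp(dt / tau)      # post before pre -> weaken

print(stdp_delta_w(t_pre=10.0, t_post=15.0))  # positive weight update
print(stdp_delta_w(t_pre=15.0, t_post=10.0))  # negative weight update
```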


Improving Record Linkage Through Pedigrees, Burdette N. Pixton Jul 2006

Improving Record Linkage Through Pedigrees, Burdette N. Pixton

Theses and Dissertations

Record linkage, in a genealogical context, is the process of identifying individuals from multiple sources that refer to the same real-world entity. Current solutions focus on the individuals in question and on complex rules developed by human experts. Genealogical databases are highly structured, with relationships existing between the individuals and other instances. These relationships can be utilized, and human involvement greatly minimized, by using a filtered structured neural network. These neural networks, using traditional back-propagation methods, are biased in a way that makes the network human-readable. The results show an increase in precision and recall when pedigree data is available …
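
One way to picture feeding pedigree context to a classifier, sketched here with invented details rather than the thesis's actual network structure: build a feature vector for each candidate record pair that includes similarities for the individuals' own fields and for their relatives' fields. The field names and string comparator below are illustrative:

```python
from difflib import SequenceMatcher

def sim(a: str, b: str) -> float:
    """String similarity in [0, 1]; a stand-in for whatever field comparators
    a real linkage system would use."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def linkage_features(rec1: dict, rec2: dict) -> list[float]:
    """Feature vector for a candidate record pair: individual fields plus
    pedigree fields (parents' names). Field names are invented."""
    return [
        sim(rec1["name"], rec2["name"]),
        sim(rec1["birthplace"], rec2["birthplace"]),
        # Pedigree context: matching relatives raises confidence that two
        # records refer to the same real-world person.
        sim(rec1["father"], rec2["father"]),
        sim(rec1["mother"], rec2["mother"]),
    ]

a = {"name": "Jon Smith", "birthplace": "Provo",
     "father": "Wm Smith", "mother": "Ann Lee"}
b = {"name": "John Smith", "birthplace": "Provo, Utah",
     "father": "William Smith", "mother": "Anne Lee"}
print(linkage_features(a, b))  # would be fed to a neural network classifier
```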


Edge Inference For Image Interpolation, Bryan S. Morse, Neil Toronto, Dan A. Ventura Aug 2005

Edge Inference For Image Interpolation, Bryan S. Morse, Neil Toronto, Dan A. Ventura

Faculty Publications

Image interpolation algorithms try to fit a function to a matrix of samples in a "natural-looking" way. This paper presents edge inference, an algorithm that does this by mixing neural network regression with standard image interpolation techniques. Results on gray level images are presented, and it is demonstrated that edge inference is capable of producing sharp, natural-looking results. A technique for reintroducing noise is given, and it is shown that, with noise added using a bicubic interpolant, edge inference can be regarded as a generalization of bicubic interpolation. Extension into RGB color space and additional applications of the algorithm are …
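
The edge-inference network itself is beyond a short excerpt, but the bicubic baseline it is compared against (and shown to generalize) is easy to demonstrate. The sketch below uses SciPy's cubic spline zoom as a close stand-in for bicubic interpolation:

```python
import numpy as np
from scipy.ndimage import zoom

# Cubic upsampling, the kind of baseline edge inference generalizes;
# the edge-inference network itself is not reproduced here.
rng = np.random.default_rng(2)
image = rng.random((32, 32))          # stand-in gray-level image
upsampled = zoom(image, 4, order=3)   # order=3: cubic spline interpolation
print(image.shape, "->", upsampled.shape)  # (32, 32) -> (128, 128)
```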


Feature Weighting Using Neural Networks, Tony R. Martinez, Xinchuan Zeng Jul 2004

Feature Weighting Using Neural Networks, Tony R. Martinez, Xinchuan Zeng

Faculty Publications

In this work we propose a feature weighting method for classification tasks by extracting relevant information from a trained neural network. This method weights an attribute based on the strengths (weights) of related links in the neural network, in which an important feature is typically connected to strong links and has more impact on the outputs. This method is applied to feature weighting for the nearest neighbor classifier and is tested on 15 real-world classification tasks. The results show that it can improve the nearest neighbor classifier on 14 of the 15 tested tasks, and also outperforms the neural network on …
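
One plausible reading of the method, sketched with invented details rather than the paper's exact formula: score each input feature by the total strength of its weight paths through a trained one-hidden-layer network, then use those scores in a weighted nearest neighbor distance:

```python
import numpy as np

def feature_weights(W_in: np.ndarray, W_out: np.ndarray) -> np.ndarray:
    """Score each input feature by the strength of its paths through a trained
    one-hidden-layer network: sum over hidden units of |input->hidden| times
    |hidden->output| weights. A simplified illustration of the idea."""
    # W_in: (n_features, n_hidden), W_out: (n_hidden, n_outputs)
    path_strength = np.abs(W_in) @ np.abs(W_out).sum(axis=1)
    return path_strength / path_strength.sum()

def weighted_nn_predict(x, X_train, y_train, w):
    """1-nearest-neighbor with a feature-weighted Euclidean distance."""
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    return y_train[np.argmin(d)]

# Illustrative trained weights and data; in practice W_in and W_out would
# come from a network trained on the classification task.
rng = np.random.default_rng(3)
W_in, W_out = rng.standard_normal((4, 8)), rng.standard_normal((8, 2))
w = feature_weights(W_in, W_out)
X_train, y_train = rng.random((50, 4)), rng.integers(0, 2, 50)
print(w, weighted_nn_predict(rng.random(4), X_train, y_train, w))
```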


Learning Multiple Correct Classifications From Incomplete Data Using Weakened Implicit Negatives, Dan A. Ventura, Stephen Whiting Jul 2004

Learning Multiple Correct Classifications From Incomplete Data Using Weakened Implicit Negatives, Dan A. Ventura, Stephen Whiting

Faculty Publications

Classification problems with output class overlap create problems for standard neural network approaches. We present a modification of a simple feed-forward neural network that is capable of learning problems with output overlap, including problems exhibiting hierarchical class structures in the output. Our method of applying weakened implicit negatives to address overlap and ambiguity allows the algorithm to learn a large portion of the hierarchical structure from very incomplete data. Our results show an improvement of approximately 58% over a standard backpropagation network on the hierarchical problem.
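
A hedged sketch of how "weakened implicit negatives" might look as a loss function: outputs explicitly labeled positive get full error weight, while outputs that are only implicitly negative (absent from the label set) get a reduced weight. The weighting scheme below illustrates the idea and is not taken from the paper:

```python
import numpy as np

def weakened_negative_loss(pred: np.ndarray, positives: np.ndarray,
                           neg_weight: float = 0.2) -> float:
    """Squared-error loss where explicitly labeled outputs get full weight and
    implicit negatives (outputs not listed as positive) get reduced weight.
    An illustration of the idea, not the paper's exact formulation."""
    target = positives.astype(float)               # 1 for labeled classes
    weight = np.where(positives, 1.0, neg_weight)  # weaken the implicit 0s
    return float((weight * (pred - target) ** 2).sum())

pred = np.array([0.9, 0.4, 0.1])
positives = np.array([True, False, False])  # only class 0 is explicitly labeled
print(weakened_negative_loss(pred, positives))
```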


Optically Simulating A Quantum Associative Memory, Dan A. Ventura, John C. Howell, John A. Yeazell Sep 2000

Optically Simulating A Quantum Associative Memory, Dan A. Ventura, John C. Howell, John A. Yeazell

Faculty Publications

This paper discusses the realization of a quantum associative memory using linear integrated optics. An associative memory produces a full pattern of bits when presented with only a partial pattern. Quantum computers have the potential to store large numbers of patterns and hence have the ability to far surpass any classical neural network realization of an associative memory. In this work two 3-qubit associative memories will be discussed using linear integrated optics. In addition, corrupted, invented and degenerate memories are discussed.
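
The quantum-optical realization is not something a short code sketch can reproduce, but the associative-memory behavior being simulated, recovering a full stored pattern from a partial cue, can be illustrated classically with a Hopfield network:

```python
import numpy as np

# Classical illustration of associative recall (completing a full pattern from
# a partial cue) with a Hopfield network; this shows the behavior an
# associative memory provides, not the paper's quantum-optical implementation.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)               # Hebbian storage, no self-connections

cue = np.array([1, -1, 1, 0, 0, 0])    # only the first three bits are known
state = np.where(cue == 0, 1, cue)     # unknown bits start at +1
for _ in range(5):                     # asynchronous recall sweeps
    for i in range(len(state)):
        h = W[i] @ state
        if h != 0:                     # ties keep the current bit
            state[i] = 1 if h > 0 else -1
print(state)                           # recovers the stored pattern [1,-1,1,-1,1,-1]
```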


Optimal Control Using A Neural/Evolutionary Hybrid System, Tony R. Martinez, Dan A. Ventura May 1998

Optimal Control Using A Neural/Evolutionary Hybrid System, Tony R. Martinez, Dan A. Ventura

Faculty Publications

One of the biggest hurdles to developing neurocontrollers is the difficulty of establishing good training data for the neural network. We propose a hybrid approach to the development of neurocontrollers that employs both evolutionary computation (EC) and neural networks (NN). EC is used to discover appropriate control actions for specific plant states. The survivors of the evolutionary process are used to construct a training set for the NN. The NN learns the training set, is able to generalize to new plant states, and is then used for neurocontrol. Thus the EC/NN approach combines the broad, parallel search of EC with …
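
A toy sketch of the EC-then-NN pipeline described above, with an invented one-dimensional plant: an evolutionary search finds a good control action for each sampled plant state, and the resulting (state, action) pairs form the training set a neural network would then learn and generalize from:

```python
import numpy as np

rng = np.random.default_rng(4)

def plant_cost(state: float, action: float) -> float:
    """Toy plant: the (unknown-to-the-controller) best action is 2*state."""
    return (action - 2.0 * state) ** 2

def evolve_action(state: float, pop_size: int = 30, gens: int = 40) -> float:
    """Tiny evolutionary search for a good control action at one plant state."""
    pop = rng.uniform(-5, 5, pop_size)
    for _ in range(gens):
        fitness = -np.array([plant_cost(state, a) for a in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]    # keep survivors
        children = parents + rng.normal(0, 0.1, parents.size)  # mutate
        pop = np.concatenate([parents, children])
    return pop[np.argmin([plant_cost(state, a) for a in pop])]

# Survivors of the evolutionary search become the NN training data; a neural
# network trained on these pairs would then generalize to unseen plant states.
states = np.linspace(-1, 1, 9)
training_set = [(s, evolve_action(s)) for s in states]
print(training_set[:3])
```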