Brigham Young University

Faculty Publications

Neural networks

Articles 1 - 8 of 8

Full-Text Articles in Physical Sciences and Mathematics

Sub-Symbolic Re-Representation To Facilitate Learning Transfer, Dan A. Ventura Mar 2008

We consider the issue of knowledge (re-)representation in the context of learning transfer and present a subsymbolic approach for effecting such transfer. Given a set of data, manifold learning is used to automatically organize the data into one or more representational transformations, which are then learned with a set of neural networks. The result is a set of neural filters that can be applied to new data as re-representation operators. Encouraging preliminary empirical results elucidate the approach and demonstrate its feasibility, suggesting possible implications for the broader field of creativity.
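
The pipeline the abstract describes (manifold learning proposes a transformation of the data, and a neural network then learns that transformation as a reusable filter) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the swiss-roll data, Isomap embedding, and network sizes are all assumed stand-ins.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from sklearn.neural_network import MLPRegressor

# Illustrative data: points in 3-D with a 2-D intrinsic manifold structure.
X, _ = make_swiss_roll(n_samples=1500, random_state=0)

# Step 1: manifold learning organizes the data into a representational
# transformation (here, a 2-D Isomap embedding).
embedding = Isomap(n_neighbors=10, n_components=2)
Z = embedding.fit_transform(X)

# Step 2: a neural network learns that transformation so it can later be
# applied to new data as a re-representation operator (a "neural filter").
filter_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                          random_state=0)
filter_net.fit(X, Z)

# The trained filter re-represents previously unseen points directly.
X_new, _ = make_swiss_roll(n_samples=10, random_state=1)
print(filter_net.predict(X_new))   # 2-D re-representation of new data
```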


Effectively Using Recurrently-Connected Spiking Neural Networks, Eric Goodman, Dan A. Ventura Jul 2005

Recurrently-connected spiking neural networks are difficult to use and understand because of the complex nonlinear dynamics of the system. Through empirical studies of spiking networks, we deduce several principles that are critical to success. Network parameters such as synaptic time delays, time constants, and connection probabilities have a significant impact on accuracy, and we show how to adjust these parameters to fit the type of problem.
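
As a rough illustration of the parameters the abstract names, the sketch below simulates a small recurrently-connected network of leaky integrate-and-fire neurons in which the connection probability, membrane time constant, and synaptic delay are explicit knobs. All numerical values are assumptions for the sketch, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters the abstract identifies as critical (values are illustrative).
n_neurons   = 100
p_connect   = 0.1     # connection probability
tau_m       = 20.0    # membrane time constant (ms)
delay_ms    = 2       # synaptic time delay, in steps of dt = 1 ms
v_threshold = 1.0
dt, steps   = 1.0, 500

# Random recurrent connectivity drawn with the chosen connection probability.
weights = (rng.random((n_neurons, n_neurons)) < p_connect) * \
          rng.normal(0.0, 0.3, (n_neurons, n_neurons))

v = np.zeros(n_neurons)                              # membrane potentials
spike_buffer = np.zeros((delay_ms + 1, n_neurons))   # circular delay buffer
spike_counts = np.zeros(n_neurons)

for t in range(steps):
    delayed_spikes = spike_buffer[t % (delay_ms + 1)]   # spikes arriving now
    i_syn = weights @ delayed_spikes                    # recurrent input
    i_ext = rng.random(n_neurons) < 0.02                # weak random drive
    v += dt * (-v / tau_m) + i_syn + 0.5 * i_ext        # leaky integration
    fired = v >= v_threshold
    v[fired] = 0.0                                      # reset after spiking
    spike_buffer[(t + delay_ms) % (delay_ms + 1)] = fired
    spike_counts += fired

print("mean spikes per neuron:", spike_counts.mean())
```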


Simplifying Ocr Neural Networks With Oracle Learning, Tony R. Martinez, Joshua Menke May 2003

Often the best model for solving a real-world problem is relatively complex. This paper presents oracle learning, a method that uses a larger model as an oracle to train a smaller model on unlabeled data in order to obtain (1) a simpler acceptable model and (2) improved results over standard training methods on a similarly sized smaller model. In particular, this paper looks at oracle learning as applied to multi-layer perceptrons trained using standard backpropagation. For optical character recognition, oracle learning results in an 11.40% average decrease in error over direct training while maintaining 98.95% of the initial oracle accuracy.
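
The core procedure (a large, already-trained "oracle" network labels unlabeled data, and a smaller network is trained to mimic it) might look roughly like this. The digits dataset, splits, and network sizes below are illustrative stand-ins for the paper's OCR setting, not its actual configuration.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Digits stands in for the OCR data; sizes and splits are illustrative.
X, y = load_digits(return_X_y=True)
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, test_size=0.5, random_state=0)

# 1. Train a large "oracle" network on the available labeled data.
oracle = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
oracle.fit(X_lab, y_lab)

# 2. Use the oracle to label data that has no human labels.
oracle_labels = oracle.predict(X_unlab)

# 3. Train a much smaller network on the oracle-labeled data.
small = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
small.fit(X_unlab, oracle_labels)

# The small model approximates the oracle at a fraction of its size.
print("agreement with oracle:",
      (small.predict(X_lab) == oracle.predict(X_lab)).mean())
```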


Concurrently Learning Neural Nets: Encouraging Optimal Behavior In Cooperative Reinforcement Learning Systems, Nancy Fulda, Dan A. Ventura May 2003

Reinforcement learning agents interacting in a common environment often fail to converge to optimal system behaviors even when the individual goals of the agents are fully compatible. Claus and Boutilier have demonstrated that the use of joint action learning helps to overcome these difficulties for Q-learning systems. This paper studies an application of joint action learning to systems of neural networks. Neural networks are a desirable candidate for such augmentations for two reasons: (1) they may be able to generalize more effectively than Q-learners, and (2) the network topology used may improve the scalability of joint action learning to systems …
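
For readers unfamiliar with Claus and Boutilier's idea, the tabular sketch below shows joint action learning in a two-agent cooperative matrix game: each agent maintains Q-values over joint actions and uses an empirical model of its partner's behavior to choose its own action. The paper itself studies a neural-network version; the game, parameters, and exploration scheme here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A cooperative two-agent matrix game: both agents receive the same reward.
# The best joint action is (1, 1); miscoordination is penalized.
rewards = np.array([[  5, -10],
                    [-10,  10]])

n_actions = 2
alpha, epsilon, episodes = 0.1, 0.2, 5000

# Each agent keeps Q-values over *joint* actions (joint action learning),
# not just over its own action.
Q = [np.zeros((n_actions, n_actions)) for _ in range(2)]

def select(agent, other_policy):
    # Epsilon-greedy over the agent's own action, marginalizing the partner's
    # action with an empirical estimate of its policy.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    expected = Q[agent] @ other_policy if agent == 0 else other_policy @ Q[agent]
    return int(np.argmax(expected))

counts = np.ones((2, n_actions))       # empirical action counts per agent
for _ in range(episodes):
    pol = counts / counts.sum(axis=1, keepdims=True)
    a0, a1 = select(0, pol[1]), select(1, pol[0])
    r = rewards[a0, a1]
    Q[0][a0, a1] += alpha * (r - Q[0][a0, a1])
    Q[1][a0, a1] += alpha * (r - Q[1][a0, a1])
    counts[0, a0] += 1
    counts[1, a1] += 1

print("greedy joint action:", np.unravel_index(np.argmax(Q[0]), Q[0].shape))
```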


A Memory-Based Approach To Cantonese Tone Recognition, Deryle W. Lonsdale, Michael Emonts Jan 2003

This paper introduces memory-based learning as a viable approach for Cantonese tone recognition. The memory-based learning algorithm employed here outperforms other documented approaches to this problem, which are based on neural networks. Various numbers of tones and features are modeled to find the best method for feature selection and extraction. To further optimize this approach, experiments are performed to isolate the best feature weighting method, the best class voting weights method, and the best value of k to use. Results and possible future work are discussed.
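
A rough sketch of the memory-based approach (store the training instances, weight features by their informativeness, and classify by distance-weighted voting among the k nearest neighbours) is given below. The iris data stands in for the acoustic tone features, and the mutual-information feature weighting via scikit-learn is an assumed, simplified analogue of the feature weighting methods the paper compares.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Iris stands in for pitch-contour features extracted from Cantonese syllables.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature weighting: scale each feature by its information gain with the class,
# so more informative features dominate the distance computation.
w = mutual_info_classif(X_tr, y_tr, random_state=0)
X_tr_w, X_te_w = X_tr * w, X_te * w

# Memory-based classification: keep all training instances and classify new
# ones by distance-weighted voting among the k nearest neighbours.
for k in (1, 3, 5, 7):
    knn = KNeighborsClassifier(n_neighbors=k, weights="distance")
    knn.fit(X_tr_w, y_tr)
    print(f"k={k}: accuracy {knn.score(X_te_w, y_te):.3f}")
```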


Improving Speech Recognition Learning Through Lazy Training, Tony R. Martinez, Michael E. Rimer, D. Randall Wilson May 2002

Multi-layer backpropagation, like most learning algorithms that can create complex decision surfaces, is prone to overfitting. We present a novel approach, called lazy training, for reducing overfit in multi-layer networks. Lazy training consistently reduces the generalization error of optimized neural networks by more than half on a large OCR dataset and on several real-world problems from the UCI machine learning repository. Here, lazy training is shown to be effective in a multi-layered adaptive learning system, reducing the error of an optimized backpropagation network in a speech recognition system by 50.0% on the TIDIGITS corpus.
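
The abstract does not spell out the lazy training mechanism; one common reading (weight updates are applied only to patterns the network does not yet classify correctly with a sufficient output margin, so already-learned patterns are left alone) can be sketched as follows. The single softmax layer, digits data, and margin value are simplifying assumptions for illustration, not the paper's multi-layer speech setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# Digits stands in for the OCR/speech data; a single softmax layer keeps the
# sketch short (the paper uses multi-layer backpropagation networks).
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_feat, n_classes = X.shape[1], 10
W, b = np.zeros((n_feat, n_classes)), np.zeros(n_classes)
lr, margin, epochs = 0.1, 0.1, 20
rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(epochs):
    for i in rng.permutation(len(X_tr)):
        p = softmax(X_tr[i] @ W + b)
        target = y_tr[i]
        # Lazy criterion (assumed): skip the update when the pattern is
        # already classified correctly with a comfortable output margin.
        if p[target] - np.delete(p, target).max() > margin:
            continue
        grad = p.copy()
        grad[target] -= 1.0                   # cross-entropy gradient w.r.t. logits
        W -= lr * np.outer(X_tr[i], grad)
        b -= lr * grad

pred = np.argmax(X_te @ W + b, axis=1)
print("test accuracy:", (pred == y_te).mean())
```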


Robust Optimization Using Training Set Evolution, Tony R. Martinez, Dan A. Ventura Jun 1996

Training Set Evolution is an eclectic optimization technique that combines evolutionary computation (EC) with neural networks (NN). The synthesis of EC with NN provides both initial unsupervised random exploration of the solution space as well as supervised generalization on those initial solutions. An assimilation of a large amount of data obtained over many simulations provides encouraging empirical evidence for the robustness of Evolutionary Training Sets as an optimization technique for feedback and control problems.
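
One way to picture the synthesis the abstract describes (evolutionary exploration discovers good examples, and a neural network then generalizes from them) is the toy sketch below. The control objective, the (1+1) mutation scheme, and the network size are assumptions made for illustration; they are not the paper's experimental setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy control objective (illustrative): for a context c, the best action is
# sin(c); reward penalizes squared distance from that optimum.
def reward(context, action):
    return -(action - np.sin(context)) ** 2

# Unsupervised random exploration: for sampled contexts, evolve an action by
# mutation, keeping the fitter candidate. Good (context, action) pairs become
# the evolved training set.
train_c, train_a = [], []
for _ in range(200):
    c = rng.uniform(-3, 3)
    a = rng.uniform(-1.5, 1.5)               # random initial candidate
    for _ in range(50):                       # simple (1+1) evolution
        mutant = a + rng.normal(0, 0.2)
        if reward(c, mutant) > reward(c, a):
            a = mutant
    train_c.append(c)
    train_a.append(a)

# Supervised generalization: a neural network learns the evolved training set
# and so provides actions for contexts never visited by evolution.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(np.array(train_c).reshape(-1, 1), np.array(train_a))

test_c = np.linspace(-3, 3, 7).reshape(-1, 1)
print("mean reward of generalized policy:",
      np.mean([reward(c[0], a) for c, a in zip(test_c, net.predict(test_c))]))
```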


Digital Neural Networks, Tony R. Martinez Jan 1988

Demands for applications requiring massive parallelism in symbolic environments have given rebirth to research in models labeled as neural networks. These models are made up of many simple nodes which are highly interconnected such that computation takes place as data flows amongst the nodes of the network. To date, most models have proposed nodes based on simple analog functions, in which inputs are multiplied by weights and summed, with the total then optionally transformed by an arbitrary function at the node. Learning in these systems is accomplished by adjusting the weights on the input lines. This paper discusses the use of …
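
For context, the conventional analog-style node the abstract describes (inputs multiplied by weights, summed, and optionally transformed, with learning realized by adjusting the input weights) looks roughly like the sketch below. This illustrates only the model the paper contrasts itself with; the truncated abstract does not detail the digital nodes the paper itself proposes.

```python
import numpy as np

def analog_node(inputs, weights, activation=np.tanh):
    """The conventional node the abstract describes: a weighted sum of the
    inputs, optionally passed through a transforming function."""
    return activation(np.dot(inputs, weights))

# Learning in such systems adjusts the weights on the input lines, e.g. by a
# simple gradient step toward a target output (values here are illustrative).
x = np.array([0.5, -1.0, 0.25])
w = np.array([0.1, 0.4, -0.3])
target, lr = 0.8, 0.5
for _ in range(100):
    out = analog_node(x, w)
    w += lr * (target - out) * (1 - out ** 2) * x   # tanh derivative term
print("output after weight adjustment:", analog_node(x, w))
```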