Open Access. Powered by Scholars. Published by Universities.®

Computer Sciences Commons

Brigham Young University

Artificial neural networks

Articles 1 - 6 of 6

Full-Text Articles in Computer Sciences

Using Perceptually Grounded Semantic Models To Autonomously Convey Meaning Through Visual Art, Derrall L. Heath Jun 2016

Theses and Dissertations

Developing advanced semantic models is important in building computational systems that can not only understand language but also convey ideas and concepts to others. Semantic models can allow a creative image-producing agent to autonomously produce artifacts that communicate an intended meaning. This notion of communicating meaning through art is often considered a necessary part of eliciting an aesthetic experience in the viewer and can thus enhance the (perceived) creativity of the agent. Computational creativity, a subfield of artificial intelligence, deals with designing computational systems and algorithms that either automatically create original and functional products or augment the ability of humans …


Improving Neural Network Classification Training, Michael Edwin Rimer Sep 2007

Theses and Dissertations

The following work presents a new set of general methods for improving neural network accuracy on classification tasks, grouped under the label of classification-based methods. The central theme of these approaches is to provide problem representations and error functions that improve classification accuracy more directly than conventional learning and error functions do. The CB1 algorithm attempts to maximize classification accuracy by selectively backpropagating error only on misclassified training patterns. CB2 adds a sliding error threshold to the CB1 algorithm, interpolating between the behavior of CB1 and standard error backpropagation as training progresses in order to avoid prematurely saturated network weights. CB3 …
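
A minimal sketch of the selective-backpropagation idea behind CB1, using a softmax linear classifier and cross-entropy error as stand-ins for the dissertation's network and error function; the function names and learning rate are illustrative assumptions, not the source's exact formulation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cb1_epoch(W, b, X, y, lr=0.1):
    """One CB1-style epoch: backpropagate error only on the
    patterns the classifier currently misclassifies."""
    preds = softmax(X @ W + b).argmax(axis=1)
    mask = preds != y                       # misclassified patterns only
    if not mask.any():
        return W, b                         # nothing left to correct
    Xm, ym = X[mask], y[mask]
    pm = softmax(Xm @ W + b)
    pm[np.arange(len(ym)), ym] -= 1.0       # dL/dlogits for cross-entropy
    W -= lr * Xm.T @ pm / len(ym)
    b -= lr * pm.mean(axis=0)
    return W, b
```

As fewer patterns are misclassified, the gradient concentrates on the remaining hard cases; CB2's sliding threshold smooths the transition between this regime and standard backpropagation.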


Learning In Short-Time Horizons With Measurable Costs, Patrick Bowen Mullen Nov 2006

Theses and Dissertations

Dynamic pricing is a difficult problem for machine learning. The environment is noisy and dynamic, and exploration carries a measurable cost, which necessitates that learning be done over short-time horizons. These short-time horizons force the learning algorithms to make pricing decisions based on scarce data. In this work, various machine learning algorithms are compared in the context of dynamic pricing. These include the Kalman filter, artificial neural networks, particle swarm optimization, and genetic algorithms. The majority of these algorithms have been modified to handle the pricing problem. The results show that these adaptations allow the learning algorithms to …
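
Of the algorithms compared, the Kalman filter is the simplest to sketch. The fragment below tracks a slowly drifting scalar (e.g., an estimated price point) from noisy observations; the random-walk state model and the noise variances q and r are assumptions for illustration, not values from the thesis:

```python
def kalman_step(x, P, z, q=1e-3, r=0.1):
    """One predict/update step of a scalar Kalman filter.
    x: state estimate, P: its variance, z: new noisy observation,
    q: process-noise variance, r: measurement-noise variance."""
    P = P + q                 # predict: random walk, uncertainty grows
    K = P / (P + r)           # Kalman gain: trust in the new observation
    x = x + K * (z - x)       # pull the estimate toward z
    P = (1.0 - K) * P         # updated uncertainty shrinks
    return x, P

# Example: start uncertain, then fold in a stream of noisy readings.
x, P = 10.0, 1.0
for z in [10.4, 9.8, 10.9, 10.2]:
    x, P = kalman_step(x, P, z)
```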


A Noise Filtering Method Using Neural Networks, Tony R. Martinez, Xinchuan Zeng May 2003

Faculty Publications

During data collection and labeling it is possible for noise to be introduced into a data set. As a result, the quality of the data set degrades, and experiments and inferences derived from it become less reliable. In this paper we present an algorithm, called ANR (automatic noise reduction), as a filtering mechanism to identify and remove noisy data items whose classes have been mislabeled. The underlying mechanism behind ANR is a framework of multi-layer artificial neural networks. ANR assigns each data item a soft class label in the form of a class probability …
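
A simplified stand-in for this kind of mislabel filtering can be built from off-the-shelf pieces. The sketch below uses a multilayer network with cross-validated predictions and flags items whose given label receives low probability; it assumes integer labels 0..K-1, and the threshold and network size are illustrative choices, not ANR's actual soft-label procedure:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict

def filter_mislabeled(X, y, threshold=0.2):
    """Drop items whose given label looks implausible to a
    multilayer network trained on the rest of the data."""
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    # Cross-validation keeps each item out of the model that scores it.
    probs = cross_val_predict(net, X, y, cv=5, method="predict_proba")
    given = probs[np.arange(len(y)), y]   # probability of the given label
    noisy = given < threshold             # likely mislabeled items
    return X[~noisy], y[~noisy], noisy
```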


Speed Training: Improving The Rate Of Backpropagation Learning Through Stochastic Sample Presentation, Timothy L. Andersen, Tony R. Martinez, Michael E. Rimer Jul 2001

Faculty Publications

Artificial neural networks provide an effective empirical predictive model for pattern classification. However, using complex neural networks to learn very large training sets is often problematic, imposing prohibitive time constraints on the training process. We present four practical methods for dramatically decreasing training time through dynamic stochastic sample presentation, a technique we call speed training. These methods are shown to retain generalization accuracy across a diverse collection of real-world data sets. In particular, the SET technique achieves a training speedup of 4278% on a large OCR database with no detectable loss in generalization.
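
One plausible reading of dynamic stochastic sample presentation is to present each pattern with a probability tied to its current error, so well-learned patterns are usually skipped. The sketch below is a guess at the general technique; the paper's specific SET schedule is not reproduced here, and the floor parameter is an assumption:

```python
import numpy as np

def select_for_presentation(errors, floor=0.05, rng=np.random.default_rng()):
    """Return indices of samples to present this epoch. Presentation
    probability scales with each sample's current error, with a small
    floor so no sample is permanently dropped."""
    e = np.asarray(errors, dtype=float)
    p = np.clip(e / (e.max() + 1e-12), floor, 1.0)
    return np.nonzero(rng.random(len(p)) < p)[0]
```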


A Multi-Chip Module Implementation Of A Neural Network, Tony R. Martinez, George L. Rudolph, Linton G. Salmon, Matthew G. Stout Mar 1994

Faculty Publications

The requirement for dense interconnect in artificial neural network systems has led researchers to seek high-density interconnect technologies. This paper reports an implementation using multi-chip modules (MCMs) as the interconnect medium. The specific system described is a self-organizing, parallel, and dynamic learning model whose need for dense interconnect is met by exploiting MCM technology. The ideas presented in this paper regarding an MCM implementation of artificial neural networks are versatile and can be adapted to other neural network and connectionist models.