Open Access. Powered by Scholars. Published by Universities.®

Electrical and Computer Engineering Commons

1996

Artificial neural networks

Articles 1 - 5 of 5

Full-Text Articles in Electrical and Computer Engineering

Similarity-Based Learning For Pattern Classification, Laurene V. Fausett Mar 1996

Electrical Engineering and Computer Science Faculty Publications

Several standard neural networks, including counterpropagation networks, predictive ART networks, and radial basis function networks, are based on a combination of clustering (unsupervised learning) and mapping (supervised learning). A comparison of the characteristics of these networks for pattern classification problems is presented.
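The clustering-plus-mapping combination described above can be illustrated with a minimal sketch: an unsupervised pass places prototype vectors, and a supervised pass attaches a class label to each prototype (a counterpropagation-style scheme). The function names and the toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cluster_then_map(X, y, k, iters=50, seed=0):
    """Unsupervised stage: Lloyd's algorithm places k prototypes.
    Supervised stage: each prototype takes the majority label of
    the training points it wins."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest prototype
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    # supervised mapping: majority class per prototype
    labels = np.array([np.bincount(y[assign == j]).argmax()
                       if np.any(assign == j) else 0 for j in range(k)])
    return centers, labels

def predict(X, centers, labels):
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return labels[d.argmin(axis=1)]

# Two well-separated 2-D blobs, one per class.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
centers, labels = cluster_then_map(X, y, k=4)
acc = (predict(X, centers, labels) == y).mean()
```

On well-separated data this two-stage scheme classifies cleanly; the networks compared in the paper differ mainly in how the two stages are learned and coupled.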


Neural Network Architecture For Solving The Algebraic Matrix Riccati Equation, Fredric M. Ham, Emmanuel G. Collins Mar 1996

Electrical Engineering and Computer Science Faculty Publications

This paper presents a neurocomputing approach for solving the algebraic matrix Riccati equation. This approach is able to exploit a good initial condition to reduce the computation time in comparison to standard methods for solving the Riccati equation. The repeated solution of closely related Riccati equations appears in homotopy algorithms for certain problems in fixed-architecture control. Hence, the new approach has the potential to significantly speed up these algorithms. It also has potential applications in adaptive control. The structured neural network architecture is trained using error backpropagation based on a steepest-descent learning rule. An example is given which illustrates the …
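A minimal numeric sketch of the underlying idea — treating the Riccati residual as an error signal driving a steepest-descent update from a supplied initial condition — might look like the loop below. This is an illustrative gradient-descent iteration, not the paper's structured network architecture; the system matrices are made up.

```python
import numpy as np

# Algebraic Riccati equation: A'P + PA - P S P + Q = 0, with S = B R^{-1} B'.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
Rw = np.array([[1.0]])
S = B @ np.linalg.inv(Rw) @ B.T

def residual(P):
    return A.T @ P + P @ A - P @ S @ P + Q

# Steepest descent on f(P) = ||residual(P)||_F^2; for symmetric P
# the gradient works out to 2[(A - S P) R + R (A - S P)'].
P = np.eye(2)                      # the "good initial condition"
eta = 1e-3
f = np.linalg.norm(residual(P)) ** 2
for _ in range(50000):
    Rm = residual(P)
    M = A - S @ P
    grad = 2.0 * (M @ Rm + Rm @ M.T)
    P_new = P - eta * grad
    f_new = np.linalg.norm(residual(P_new)) ** 2
    if f_new < f:
        P, f = P_new, f_new
    else:
        eta *= 0.5                 # crude backtracking keeps descent monotone
final_residual = np.linalg.norm(residual(P))
```

The closer the initial `P` is to the solution, the fewer descent steps are needed — the property the homotopy setting exploits, since each continuation step supplies a nearby previous solution.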


Determination Of Adaptively Adjusted Coefficients For Hopfield Neural Networks Utilizing The Energy Function, Chiyeon Park, Donald W. Fausett Mar 1996

Electrical Engineering and Computer Science Faculty Publications

With its potential for parallel computation and general applicability, the Hopfield neural network has been investigated and improved by many researchers in order to extend its usefulness to various combinatorial problems. Despite its success in several applications with different energy function formulations, determination of the energy coefficients has been based primarily on trial and error, since no practical and systematic way of finding good values has previously been available, although some theoretical analyses have been presented. In this paper, we present a methodical procedure which adaptively determines the energy coefficients leading to a valid solution as the …
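The energy-function machinery at issue can be shown in a few lines: for a symmetric weight matrix with zero diagonal, each asynchronous update of a bipolar Hopfield unit can only lower (or preserve) the energy. The small random network below is purely illustrative; the paper's coefficient-selection procedure itself is not reproduced here.

```python
import numpy as np

def energy(W, theta, s):
    """Hopfield energy E(s) = -1/2 s'Ws + theta's."""
    return -0.5 * s @ W @ s + theta @ s

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2            # symmetric weights
np.fill_diagonal(W, 0.0)     # no self-connections
theta = rng.normal(size=n)
s = rng.choice([-1.0, 1.0], size=n)

energies = [energy(W, theta, s)]
for _ in range(5):           # sweeps of asynchronous updates
    for i in range(n):
        s[i] = 1.0 if W[i] @ s - theta[i] >= 0 else -1.0
        energies.append(energy(W, theta, s))
monotone = all(b <= a + 1e-12 for a, b in zip(energies, energies[1:]))
```

For a combinatorial problem, constraint-penalty terms are folded into `W` and `theta` scaled by the energy coefficients — which is exactly why poorly chosen coefficients can drive the descent to an invalid minimum.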


Partial Least-Squares Regression Neural Network (PLSNET) With Supervised Adaptive Modular Learning, Fredric M. Ham, Ivica Kostanic Mar 1996

Electrical Engineering and Computer Science Faculty Publications

We present in this paper an adaptive linear neural network architecture called PLSNET. This network is based on partial least-squares (PLS) regression. The architecture is a modular network whose stages correspond to the desired number of PLS factors to be retained. PLSNET actually consists of two separate but coupled architectures: PLSNET-C for PLS calibration, and PLSNET-P for prediction (or estimation). We show that PLSNET-C can be trained by supervised learning with three standard Hebbian learning rules that extract the PLS weight loading vectors, the regression coefficients, and the loading vectors for the univariate output component case …
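For context, the quantities PLSNET's Hebbian rules learn — weight vectors, loadings, and regression coefficients per factor — are the same ones a plain batch PLS1 extraction loop computes for the univariate-output case. The sketch below is that standard batch algorithm, not the paper's adaptive network.

```python
import numpy as np

def pls1(X, y, n_factors):
    """Batch PLS1: per factor, a weight vector w, score t = Xw,
    an X-loading p, and a regression coefficient b, with deflation."""
    Xk, yk = X.copy(), y.copy()
    W, P, b = [], [], []
    for _ in range(n_factors):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # PLS weight (loading) vector
        t = Xk @ w                      # latent score
        tt = t @ t
        p = Xk.T @ t / tt               # X loading
        bk = (yk @ t) / tt              # regression coefficient
        Xk = Xk - np.outer(t, p)        # deflate X and y
        yk = yk - bk * t
        W.append(w); P.append(p); b.append(bk)
    return np.array(W).T, np.array(P).T, np.array(b)

def pls1_predict(X, W, P, b):
    Xk = X.copy()
    yhat = np.zeros(len(X))
    for k in range(W.shape[1]):
        t = Xk @ W[:, k]
        yhat += b[k] * t
        Xk = Xk - np.outer(t, P[:, k])
    return yhat

# Noiseless linear data: with all factors retained, PLS1 fits exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta
W, P, b = pls1(X, y, n_factors=3)
err = np.max(np.abs(pls1_predict(X, W, P, b) - y))
```

Retaining fewer factors than predictors is where PLS earns its keep on collinear data; the modular stages of PLSNET map one-to-one onto the factors of this loop.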


Comparison Of Function Approximation With Sigmoid And Radial Basis Function Networks, Gary Russell, Laurene V. Fausett Mar 1996

Electrical Engineering and Computer Science Faculty Publications

Theoretical and computational results have demonstrated that several types of neural networks have the universal approximation property, i.e., the ability to represent any continuous function to an arbitrary degree of accuracy, given enough hidden units. However, practical considerations, such as the relative advantages of different networks for function approximation using a small to moderate number of hidden units, are not as well understood. This paper presents preliminary results of investigations into the comparison of networks using sigmoidal activation functions and networks using radial basis functions. In particular, we consider the ability of several such networks to learn mappings from the …
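As a toy version of such a comparison (not the paper's experiments), one can fix the hidden layers — random sigmoid weights on one side, evenly spaced Gaussian centers on the other — and fit only the linear output weights of each network by least squares on a one-dimensional target. All sizes and parameters below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)
n_hidden = 20

# Sigmoid network: random hidden slopes/offsets, linear output layer.
a = rng.normal(scale=2.0, size=n_hidden)
c = rng.uniform(0, 2 * np.pi, size=n_hidden)
H_sig = 1.0 / (1.0 + np.exp(-a * (x[:, None] - c)))

# RBF network: evenly spaced Gaussian centers, shared width.
centers = np.linspace(0, 2 * np.pi, n_hidden)
width = centers[1] - centers[0]
H_rbf = np.exp(-((x[:, None] - centers) ** 2) / (2 * width ** 2))

def fit_rmse(H, y):
    """Least-squares fit of the output weights; return training RMSE."""
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.sqrt(np.mean((H @ w - y) ** 2))

rmse_sig = fit_rmse(H_sig, y)
rmse_rbf = fit_rmse(H_rbf, y)
```

Both hidden-unit types fit a smooth 1-D target easily at this size; the interesting comparisons — which the paper takes up — arise with small hidden layers, higher-dimensional inputs, and generalization rather than training error.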