Open Access. Powered by Scholars. Published by Universities.®

Digital Commons Network

Articles 1 - 2 of 2

Full-Text Articles in Entire DC Network

Parallel Implementation Of A Recursive Least Squares Neural Network Training Method On The Intel iPSC/2, James Edward Steck, Bruce M. McMillin, K. Krishnamurthy, M. Reza Ashouri, Gary G. Leininger Jun 1990

Computer Science Faculty Research & Creative Works

An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou (IEEE Trans. Circuits Syst., vol. 36, no. 8, pp. 1092-1101, Aug. 1989) to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet for many applications the method's added computational complexity outweighs any gain in learning rate over current training methods. However, the least-squares method can be implemented more efficiently on parallel architectures than standard methods. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented …
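
The Marquardt-Levenberg least-squares update at the core of the training method above can be sketched as follows. This is only an illustrative single-neuron fit with a finite-difference Jacobian, not the paper's parallel implementation; the model, data, and damping schedule are assumptions made for the example.

```python
import numpy as np

def residuals(params, x, y):
    # Per-sample error of a toy one-neuron model y_hat = tanh(w*x + b).
    w, b = params
    return np.tanh(w * x + b) - y

def numerical_jacobian(params, x, y, eps=1e-6):
    # Forward-difference Jacobian of the residual vector w.r.t. the parameters.
    r0 = residuals(params, x, y)
    J = np.empty((r0.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += eps
        J[:, j] = (residuals(p, x, y) - r0) / eps
    return J

def lm_fit(x, y, params, lam=1e-2, iters=50):
    for _ in range(iters):
        r = residuals(params, x, y)
        J = numerical_jacobian(params, x, y)
        # Marquardt-Levenberg step: solve (J^T J + lam*I) dp = -J^T r.
        A = J.T @ J + lam * np.eye(params.size)
        dp = np.linalg.solve(A, -J.T @ r)
        new = params + dp
        if np.sum(residuals(new, x, y) ** 2) < np.sum(r ** 2):
            params, lam = new, lam * 0.5   # accept the step, trust the model more
        else:
            lam *= 2.0                     # reject the step, increase damping
    return params

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 40)
y = np.tanh(1.5 * x - 0.3)                 # noiseless targets from known weights
fit = lm_fit(x, y, np.array([0.1, 0.0]))
```

The damping term `lam` interpolates between Gauss-Newton (small `lam`) and gradient descent (large `lam`), which is the trade-off that buys the faster learning rate at a higher per-step cost.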


Neural Networks In Manufacturing: Possible Impacts On Cutting Stock Problems, Cihan H. Dagli Jan 1990

Engineering Management and Systems Engineering Faculty Research & Creative Works

The potential of neural networks is examined, and the effect of parallel processing on the solution of the stock-cutting problem is assessed. The conceptual model proposed integrates a feature-recognition network and a simulated annealing approach. The model uses a neocognitron neural network paradigm to generate data for assessing the degree of match between two irregular patterns. The information generated through the feature recognition network is passed to an energy function, and the optimal configuration of patterns is computed using a simulated annealing algorithm. Basics of the approach are demonstrated with an example.
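
The energy-minimisation stage described above can be illustrated with a generic simulated-annealing loop. The energy here is a toy number-partitioning objective standing in for the paper's pattern-configuration energy (which would require the neocognitron match data); the move set, temperature schedule, and step counts are assumptions for the sketch.

```python
import math
import random

def energy(assign, sizes):
    # |difference between the two groups' totals|: 0 means a perfect split.
    a = sum(s for s, g in zip(sizes, assign) if g)
    return abs(2 * a - sum(sizes))

def anneal(sizes, steps=20000, t0=5.0, cooling=0.9995, seed=1):
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in sizes]
    e, t = energy(assign, sizes), t0
    for _ in range(steps):
        i = rng.randrange(len(sizes))
        assign[i] = not assign[i]            # propose: move one piece across
        e2 = energy(assign, sizes)
        # Metropolis rule: accept uphill moves with prob exp(-dE/T) to
        # escape local minima; always accept downhill or equal moves.
        if e2 <= e or rng.random() < math.exp((e - e2) / max(t, 1e-9)):
            e = e2
        else:
            assign[i] = not assign[i]        # reject: undo the move
        t *= cooling                         # geometric cooling schedule
    return assign, e

sizes = [7, 5, 3, 3, 2, 6, 4, 4, 1, 5]
assign, best = anneal(sizes)
```

In the cutting-stock setting described above, `energy` would instead score a configuration of irregular patterns using the match degrees produced by the feature-recognition network, but the accept/reject loop is the same.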