
Engineering Commons


Aerospace Engineering

Missouri University of Science and Technology

Parallel Architectures

Articles 1 - 2 of 2

Full-Text Articles in Engineering

Hierarchical Neurocontroller Architecture For Robotic Manipulation, Xavier J. R. Avula, Luis C. Rabelo Jan 1992

Chemical and Biochemical Engineering Faculty Research & Creative Works

A hierarchical neurocontroller architecture consisting of two artificial neural network systems for the manipulation of a robotic arm is presented. The higher-level network system participates in the delineation of the robot arm workspace, coordinate transformation, and the motion decision-making process. The lower-level network provides the correct sequence of control actions. A straightforward example illustrates the architecture's capabilities, including speed, adaptability, and computational efficiency.
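
A minimal sketch of such a two-level controller is given below; it is not code from the paper. The arm model, layer sizes, and the split of roles (higher-level network: Cartesian target to joint-space set points; lower-level network: joint state plus set points to a control action) are all illustrative assumptions.

```python
import numpy as np

# Hypothetical two-level neurocontroller sketch (not from the paper):
# a 2-link planar arm, tiny tanh feedforward networks, untrained weights.
rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights/biases for a small feedforward network (toy sizes)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """tanh hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Higher-level network: a Cartesian target inside the workspace -> joint-space
# set points (standing in for workspace delineation / coordinate transformation).
high_level = init_mlp([2, 16, 2])   # (x, y) target -> (theta1, theta2)

# Lower-level network: current joint state plus set points -> control action
# (standing in for the "correct sequence of control actions").
low_level = init_mlp([4, 16, 2])    # (theta, theta_target) -> torques

def control_step(cartesian_target, joint_state):
    joint_target = forward(high_level, cartesian_target)
    return forward(low_level, np.concatenate([joint_state, joint_target]))

# Usage with made-up numbers (illustration only; networks are untrained).
print(control_step(np.array([0.5, 0.3]), np.array([0.1, -0.2])))
```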


Parallel Implementation Of A Recursive Least Squares Neural Network Training Method On The Intel iPSC/2, James Edward Steck, Bruce M. McMillin, K. Krishnamurthy, M. Reza Ashouri, Gary G. Leininger Jun 1990

Computer Science Faculty Research & Creative Works

An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou (IEEE Trans. Circuits Syst., vol. 36, no. 8, pp. 1092-1101, Aug. 1989) to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet, for many applications, the increase in computational complexity of the method outweighs any gain in learning rate obtained over current training methods. However, the least-squares method can be more efficiently implemented on parallel architectures than standard methods. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented …
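
To make the least-squares versus gradient-descent comparison concrete, the following is a minimal recursive least-squares (RLS) sketch for a single linear unit on synthetic data. It is only a stand-in under stated assumptions: the Marquardt-Levenberg-style training of multilayer networks discussed in the abstract and its distribution across the iPSC/2 hypercube are not reproduced, and the forgetting factor lam and initialization constant delta are arbitrary choices.

```python
import numpy as np

# Recursive least-squares (RLS) for one linear unit on synthetic data.
# Illustrative only: lam (forgetting factor) and delta (P initialization)
# are arbitrary; no multilayer network or hypercube decomposition here.
rng = np.random.default_rng(1)

def rls_train(X, d, lam=0.99, delta=100.0):
    """Sequentially track the least-squares weights as samples arrive."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)             # inverse-correlation matrix estimate
    for x, target in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)       # gain vector
        w = w + k * (target - w @ x)  # update with the a priori error
        P = (P - np.outer(k, Px)) / lam
    return w

# Usage: recover a known linear map from noisy samples.
true_w = np.array([1.5, -2.0, 0.5])
X = rng.standard_normal((200, 3))
d = X @ true_w + 0.01 * rng.standard_normal(200)
print(rls_train(X, d))                # close to true_w
```

Per sample, an RLS update of this kind costs O(n^2) versus O(n) for a plain gradient step, which is the sort of added computation the abstract weighs against the faster convergence of least-squares training.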