Open Access. Powered by Scholars. Published by Universities.®

Digital Commons Network

Articles 1 - 2 of 2

The Waveform Relaxation Method For Systems Of Differential/Algebraic Equations, Marija D. Ilić, Mariesa Crow Dec 1990

Electrical and Computer Engineering Faculty Research & Creative Works

An extension of the waveform relaxation (WR) algorithm to systems of differential/algebraic equations (DAE) is presented. Although this type of application has been explored earlier in relation to VLSI circuits, the algorithm has not been generalized to include the vast array of DAE system structures. The solvability and convergence requirements of the WR algorithm for higher-index systems are established. Many systems in robotics and control applications are modeled with DAE systems having an index greater than two. Computer simulation of these systems has been hampered by numerical integration methods which perform poorly and must be explicitly tailored to the system. …
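The sweep structure the abstract describes can be sketched for the simplest case: Gauss-Jacobi waveform relaxation on a small linear ODE system (not the higher-index DAE case the paper treats). Each sweep re-integrates every component over the whole time window, using the previous iterate's waveform for the coupling terms. All names and the forward-Euler integrator here are illustrative, not the paper's method:

```python
import numpy as np

def waveform_relaxation(A, x0, T=1.0, n_steps=100, sweeps=20):
    """Gauss-Jacobi waveform relaxation for x' = A x on [0, T] (sketch)."""
    h = T / n_steps
    n = len(x0)
    # Waveform iterate: shape (n, n_steps + 1), initialized to a constant guess.
    x = np.tile(np.asarray(x0, dtype=float)[:, None], (1, n_steps + 1))
    for _ in range(sweeps):
        x_old = x.copy()
        x_new = np.empty_like(x)
        x_new[:, 0] = x0
        for i in range(n):
            for k in range(n_steps):
                # Forward Euler on component i; other components are frozen
                # at the previous sweep's waveform (the "relaxation" step).
                coupling = sum(A[i, j] * x_old[j, k] for j in range(n) if j != i)
                x_new[i, k + 1] = x_new[i, k] + h * (A[i, i] * x_new[i, k] + coupling)
        x = x_new
    return x

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
sol = waveform_relaxation(A, [1.0, 0.0])
```

For this weakly coupled example the sweeps converge geometrically toward the solution of the coupled system; the convergence and solvability conditions for DAEs of index greater than one are exactly what the abstract says the paper establishes.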


Parallel Implementation Of A Recursive Least Squares Neural Network Training Method On The Intel iPSC/2, James Edward Steck, Bruce M. McMillin, K. Krishnamurthy, M. Reza Ashouri, Gary G. Leininger Jun 1990

Computer Science Faculty Research & Creative Works

An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou (IEEE Trans. Circuits Syst., vol. 36, no. 8, pp. 1092-1101, Aug. 1989) to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet, for many applications, the increase in computational complexity of the method outweighs any gain in learning rate obtained over current training methods. However, the least-squares method can be more efficiently implemented on parallel architectures than standard methods. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented …
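The core of a recursive least-squares update, of the kind the title refers to, can be sketched for a single linear neuron. This is a generic textbook RLS recursion, not the paper's parallel iPSC/2 implementation, and every name below is illustrative:

```python
import numpy as np

def rls_train(X, y, lam=1.0, delta=100.0):
    """Recursive least squares for a linear unit y ≈ w·x (sketch).

    lam   -- forgetting factor (1.0 = no forgetting)
    delta -- scale of the initial inverse-correlation matrix P
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)
    P = delta * np.eye(n_features)  # estimate of the inverse input correlation
    for x, d in zip(X, y):
        k = P @ x / (lam + x @ P @ x)      # gain vector
        e = d - w @ x                      # a-priori output error
        w = w + k * e                      # weight update
        P = (P - np.outer(k, x @ P)) / lam # inverse-correlation update
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w
w = rls_train(X, y)
```

Each sample costs O(n²) for the `P` update rather than the O(n) of a gradient step, which is the complexity trade-off the abstract notes; the matrix-heavy updates are also what make the method amenable to parallel hardware.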