Full-Text Articles in Engineering
Reinforcement Learning Based Output-Feedback Control Of Nonlinear Nonstrict Feedback Discrete-Time Systems With Application To Engines, Peter Shih, Jonathan B. Vance, Brian C. Kaul, Jagannathan Sarangapani, J. A. Drallmeier
Electrical and Computer Engineering Faculty Research & Creative Works
A novel reinforcement-learning-based output-adaptive neural network (NN) controller, also referred to as the adaptive-critic NN controller, is developed to track a desired trajectory for a class of complex nonlinear discrete-time systems in the presence of bounded and unknown disturbances. The controller comprises an observer for estimating the states and outputs, a critic, and two action NNs for generating the virtual and actual control inputs. The critic approximates a certain strategic utility function, and the action NNs are used to minimize both the strategic utility function and their outputs. All NN weights adapt online toward minimization of a performance index, using a gradient-descent-based rule. …
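The abstract's core mechanism, NN weights adapted online by a gradient-descent rule toward minimizing a utility signal, can be sketched as follows. This is an illustrative single-hidden-layer critic with generic hyperparameters, not the paper's specific adaptation laws; the class name `CriticNN` and the tanh basis are assumptions.

```python
import numpy as np

class CriticNN:
    """Toy single-hidden-layer critic with a generic gradient-descent
    weight update (an illustration, not the paper's adaptation law)."""

    def __init__(self, n_in, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.V = rng.normal(size=(n_hidden, n_in))  # input-layer weights (held fixed)
        self.W = np.zeros(n_hidden)                 # output-layer weights (adapted online)
        self.lr = lr

    def utility(self, x):
        # Estimated utility: W^T * phi(V x), with tanh basis functions
        return self.W @ np.tanh(self.V @ x)

    def update(self, x, target):
        # Gradient descent on e^2/2, where e = utility(x) - target
        phi = np.tanh(self.V @ x)
        e = self.W @ phi - target
        self.W -= self.lr * e * phi
        return e

critic = CriticNN(n_in=2, n_hidden=8)
x = np.array([0.5, -0.3])
e0 = critic.update(x, target=1.0)
for _ in range(200):
    e = critic.update(x, target=1.0)   # error shrinks toward zero
```

Only the output-layer weights are adapted here; adapting the input layer as well would follow the same gradient pattern through the tanh nonlinearity.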
Near Optimal Neural Network-Based Output Feedback Control Of Affine Nonlinear Discrete-Time Systems, Qinmin Yang, Jagannathan Sarangapani
Electrical and Computer Engineering Faculty Research & Creative Works
In this paper, a novel online reinforcement-learning neural network (NN)-based optimal output feedback controller, referred to as the adaptive critic controller, is proposed for affine nonlinear discrete-time systems to deliver a desired tracking performance. The adaptive critic design consists of three entities: an observer that estimates the system states, an action network that produces the optimal control input, and a critic that evaluates the performance of the action network. The critic is termed adaptive because it adapts itself to output the optimal cost-to-go function, which is based on the standard Bellman equation. By using the Lyapunov approach, the uniform ultimate boundedness …
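The Bellman recursion behind such a critic trains the cost-to-go estimate J(x_k) toward U(x_k, u_k) + γ·J(x_{k+1}). A minimal sketch of that idea, using a tabular TD(0) update on a hypothetical two-state system (the states, costs, and gains are illustrative assumptions, not from the paper):

```python
import numpy as np

def td_update(J, s, s_next, cost, gamma=0.9, lr=0.5):
    """Move J[s] toward the Bellman target cost + gamma * J[s_next]."""
    target = cost + gamma * J[s_next]
    J[s] += lr * (target - J[s])
    return J

J = np.zeros(2)
# Toy system: state 0 transitions to state 1 with stage cost 1;
# state 1 is an absorbing goal state with stage cost 0.
for _ in range(100):
    J = td_update(J, 0, 1, cost=1.0)
    J = td_update(J, 1, 1, cost=0.0)
# J converges toward the true cost-to-go: J[0] -> 1.0, J[1] -> 0.0
```

In the paper's setting the critic is an NN rather than a table, but the target it regresses onto has this same Bellman structure.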
Online Reinforcement Learning Control Of Unknown Nonaffine Nonlinear Discrete Time Systems, Qinmin Yang, Jagannathan Sarangapani
Electrical and Computer Engineering Faculty Research & Creative Works
In this paper, a novel neural network (NN)-based online reinforcement learning controller is designed for nonaffine nonlinear discrete-time systems with bounded disturbances. The nonaffine systems are represented by a nonlinear autoregressive moving average with exogenous inputs (NARMAX) model with unknown nonlinear functions. An equivalent affine-like representation of the tracking-error dynamics is first developed from the original nonaffine system. Subsequently, a reinforcement-learning-based NN controller is proposed for the affine-like nonlinear error dynamics. The control scheme consists of two NNs. One NN is designated as the critic, which approximates a predefined long-term cost function, whereas an …
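A NARMAX model propagates the output through a nonlinear function of past outputs and inputs, y_{k+1} = f(y_k, y_{k-1}, …, u_k, …). A minimal sketch with a toy, contractive nonlinearity standing in for the paper's unknown functions (the specific recursion and input signal are assumptions for illustration):

```python
import numpy as np

def narmax_step(y_hist, u_hist):
    """Toy NARMAX recursion y_{k+1} = f(y_k, y_{k-1}, u_k); the
    nonlinearity is an illustrative stand-in for the unknown map."""
    return 0.5 * np.tanh(y_hist[-1]) + 0.3 * y_hist[-2] + u_hist[-1]

y = [0.0, 0.0]   # output history
u = [0.0]        # input history
for k in range(50):
    u.append(0.1 * np.sin(0.2 * k))   # an assumed bounded input sequence
    y.append(narmax_step(y, u))
# With this contractive f and bounded input, the output stays bounded.
```

In the paper the map f is unknown; the affine-like reformulation lets the NN controller treat the error dynamics as if the control input entered affinely.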