Open Access. Powered by Scholars. Published by Universities.®


Wayne State University

Numerical Analysis and Computation

Optimal control

Articles 1 - 4 of 4

Full-Text Articles in Physical Sciences and Mathematics

On The LQG Theory With Bounded Control, D. V. Iourtchenko, J. L. Menaldi, A. S. Bratus Oct 2010


Mathematics Faculty Research Publications

We consider a stochastic optimal control problem in the whole space, where the corresponding HJB equation is degenerate, with a quadratic running cost and coefficients with linear growth. In this paper we provide full mathematical details on the key estimate describing the asymptotic behavior of the solution as the space variable goes to infinity.
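The bounded-control setting described in the abstract can be illustrated with a small Monte Carlo sketch (not taken from the article): a scalar controlled diffusion dx = (ax + u)dt + σ dW with quadratic running cost x² + u², where a linear feedback is clipped to a bound |u| ≤ u_max. All parameter values and the feedback gain below are illustrative assumptions.

```python
import numpy as np

def simulate_cost(u_max, k=1.0, a=0.5, sigma=0.3, x0=1.0,
                  dt=1e-3, T=5.0, n_paths=200, seed=0):
    """Average pathwise cost of a saturated linear feedback u = clip(-k*x).

    Illustrative sketch only: dynamics dx = (a*x + u) dt + sigma dW,
    running cost x^2 + u^2, Euler-Maruyama discretization.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.full(n_paths, x0)
    cost = np.zeros(n_paths)
    for _ in range(n_steps):
        u = np.clip(-k * x, -u_max, u_max)        # bounded control
        cost += (x**2 + u**2) * dt                # quadratic running cost
        x += (a * x + u) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return cost.mean()

# With these illustrative values the tight bound cannot stabilize the
# unstable drift (a > 0), so its cost is much larger than the loose bound's.
tight = simulate_cost(u_max=0.2)
loose = simulate_cost(u_max=5.0)
```

The comparison shows why the bound matters: with a = 0.5 the drift is unstable, and a control clipped at 0.2 cannot counteract it from x₀ = 1, while the loosely bounded feedback behaves like the unconstrained LQG controller.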


Discrete Maximum Principle For Nonsmooth Optimal Control Problems With Delays, Boris S. Mordukhovich, Ilya Shvartsman Dec 2001


Mathematics Research Reports

We consider optimal control problems for discrete-time systems with delays. The main goal is to derive necessary optimality conditions of the discrete maximum principle type in the case of nonsmooth minimizing functions. We obtain two independent forms of the discrete maximum principle with transversality conditions described in terms of subdifferentials and superdifferentials, respectively. The superdifferential form is new even for non-delayed systems and may be essentially stronger than a more conventional subdifferential form in some situations.
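A toy instance of the discrete-time delayed setting (not from the paper) can make the problem class concrete: a scalar system x_{k+1} = a·x_k + b·x_{k-1} + u_k with a one-step state delay and quadratic cost, minimized by brute force over a small control grid. All dynamics coefficients, cost weights, and grid values below are illustrative assumptions.

```python
import itertools

def best_control(a=0.8, b=0.3, x0=1.0, horizon=4,
                 grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Brute-force search for a discrete-time problem with a one-step delay.

    Illustrative sketch only: x_{k+1} = a*x_k + b*x_{k-1} + u_k,
    running cost x^2 + 0.1*u^2, terminal cost x^2.
    """
    def cost(controls):
        x_prev, x = 0.0, x0                  # x_{-1} = 0: initial history
        total = 0.0
        for u in controls:
            total += x**2 + 0.1 * u**2       # quadratic running cost
            x_prev, x = x, a * x + b * x_prev + u
        return total + x**2                  # terminal cost
    best = min(itertools.product(grid, repeat=horizon), key=cost)
    return best, cost(best), cost((0.0,) * horizon)

controls, opt_cost, zero_cost = best_control()
```

The brute-force minimizer plays the role of the optimal process whose necessary conditions the discrete maximum principle describes; here it simply verifies that a nontrivial control beats doing nothing.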


Optimal Control Of Stochastic Integrals And Hamilton-Jacobi-Bellman Equations, Ii, Pierre-Louis Lions, José-Luis Menaldi Jan 1982


Mathematics Faculty Research Publications

We consider the solution of a stochastic integral control problem, and we study its regularity. In particular, we characterize the optimal cost as the maximum solution of

∀v ∈ V: A(v)u ≤ f(v) in D′(O), u = 0 on ∂O, u ∈ W^{1,∞}(O),

where A(v) is a uniformly elliptic second order operator and V is the set of the values of the control.


Optimal Control Of Stochastic Integrals And Hamilton-Jacobi-Bellman Equations, I, Pierre-Louis Lions, José-Luis Menaldi Jan 1982


Mathematics Faculty Research Publications

We consider the solution of a stochastic integral control problem and we study its regularity. In particular, we characterize the optimal cost as the maximum solution of

∀v ∈ V: A(v)u ≤ f(v) in D′(O), u = 0 on ∂O, u ∈ W^{1,∞}(O),

where A(v) is a uniformly elliptic second order operator and V is the set of the values of the control.