Optimal Control

Optimal Control – University of Maryland, College Park, Electrical & Computer Engineering & Institute for Systems Research

Lecture Notes by P. S. Krishnaprasad

Survey Lecture on Linear Systems and link to ENEE 660 System Theory Notes

References

  1. Lecture 1
  2. Lecture 2
  3. Lecture 3
  4. Lecture 4 and an addendum
  5. Lecture 4 Page 12 fix
  6. Lecture 5(a), updated; Lecture 5(b); Lecture 5(c); Lecture 5(c) update
  7. Lecture 6
  8. Lecture 7 and solution to Queen Dido’s problem
  9. Lecture 7 addendum (on transversality condition)
  10. Lecture 8 on fixed point problems
  11. Lecture 9(a) on Newton’s method, and additional material (Lecture 9(b)) on the mean value theorem
  12. Lecture 10(a) on Newton’s method and rate of convergence, and Lecture 10(b) on iterative minimization (the basic Newton iteration is recalled after this list)
  13. Lecture 11(a) on second order necessary conditions
  14. Lecture 11(b) on Taylor’s theorem
  15. Lecture 11(c) on second order necessary conditions in the calculus of variations (Legendre)
  16. Lecture 12 on the maximum principle
  17. Lecture 13 on the Hamilton-Jacobi-Bellman equation
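
As a refresher on the topic of Lectures 9 and 10 above (standard material, recalled here for convenience; the notation need not match the notes), Newton’s method for minimizing a smooth function f iterates

\[
x_{k+1} = x_k - \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k),
\]

and converges quadratically to a local minimizer once the iterates are close enough to it and the Hessian there is nonsingular.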

------------------------------------------------------------

Lecture Notes by Professor Andre L. Tits

Homework Assignments

Optimal Control – University of Illinois at Urbana-Champaign

Textbooks:

  • Dimitri P. Bertsekas, Dynamic Programming and Optimal Control, Volume I, 3rd edition, Athena Scientific, 2005.
  • M. Athans and P. L. Falb, Optimal Control, McGraw-Hill, 2007 (paperback).
  • I. M. Gel’fand and S. V. Fomin, Calculus of Variations, Dover Publications, 2000.

Homework | Lecture Notes

Course Outline:

I. Introduction

  1. Formulation of optimal control problems
  2. Parameter optimization versus path optimization
  3. Local and global optima; general conditions on existence and uniqueness
  4. Some basic facts from finite-dimensional optimization (the classical optimality conditions are recalled after this list)
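
As a pointer for item 4 (standard results, stated here in common notation rather than taken from the course materials): for a twice-differentiable f : R^n \to R, an unconstrained local minimizer x^* necessarily satisfies

\[
\nabla f(x^*) = 0, \qquad \nabla^2 f(x^*) \succeq 0,
\]

while \nabla f(x^*) = 0 together with \nabla^2 f(x^*) \succ 0 is sufficient for a strict local minimum.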

II. The Calculus of Variations

  1. The Euler-Lagrange equation (stated after this list)
  2. Path optimization subject to constraints
  3. Weak and strong extrema
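
For item 1, with the functional J(x) = \int_{t_0}^{t_1} L(t, x(t), \dot{x}(t))\,dt, any smooth extremal must satisfy the Euler-Lagrange equation (standard form; the course’s notation may differ):

\[
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0.
\]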

III. The Minimum (Maximum) Principle and the Hamilton-Jacobi Theory

  1. Pontryagin’s minimum principle
  2. Optimal control with state and control constraints
  3. Time-optimal control
  4. Singular solutions
  5. Hamilton-Jacobi-Bellman (HJB) equation, and dynamic programming (see the equations after this list)
  6. Viscosity solutions to HJB
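
For items 1 and 5, the standard statements read as follows (sign and min/max conventions vary across texts). With dynamics \dot{x} = f(x,u), running cost L(x,u), and Hamiltonian H(x,p,u) = L(x,u) + p^{\top} f(x,u), Pontryagin’s minimum principle requires the optimal control to minimize H pointwise along the optimal trajectory,

\[
u^*(t) \in \arg\min_u H\big(x^*(t), p(t), u\big), \qquad \dot{p} = -\frac{\partial H}{\partial x},
\]

and the value function V(t,x) formally satisfies the HJB equation

\[
-\frac{\partial V}{\partial t} = \min_u \Big[\, L(x,u) + \frac{\partial V}{\partial x}^{\top} f(x,u) \Big].
\]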

IV. Linear Quadratic Gaussian (LQG) Problems

  1. Finite-time and infinite-time state (or output) regulators
  2. Riccati equation and its properties (recalled after this list)
  3. Tracking and disturbance rejection
  4. Kalman filter and duality
  5. The LQG design
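
For item 2, in the standard infinite-horizon LQR setting (textbook notation, not necessarily the course’s): with dynamics \dot{x} = Ax + Bu and cost \int_0^{\infty} \big(x^{\top}Qx + u^{\top}Ru\big)\,dt, the optimal feedback is u = -R^{-1}B^{\top}P\,x, where P = P^{\top} \succeq 0 solves the algebraic Riccati equation

\[
A^{\top}P + PA - PBR^{-1}B^{\top}P + Q = 0.
\]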

V. Nonholonomic System Optimal Control
VI. Game-Theoretic Optimal Control Design

 

 
