Iterative Approximation of Optimal Control for a Class of Nonlinear Systems
Proceedings of the 15th Latinamerican Control Conference
For nonlinear systems the optimal control law is given by the solution of the Hamilton-Jacobi-Bellman equation, which cannot be solved in general. The method proposed in this paper obtains a solution by successive approximation based on the solution of the Generalized Hamilton-Jacobi-Bellman equation. Successive improvement of the control law yields an approximation of the optimal control that is optimal in a bounded region around the origin. Application of Policy Iteration to an example, an unstable, nonlinear, inverted pendulum, demonstrates the capabilities of the overall approach. Two different implementations of Policy Iteration have been applied to this example: one uses simulation to approximate the solution of the Generalized Hamilton-Jacobi-Bellman equation, and the other is based on a numerical solution. While the first realization is computationally expensive but requires only little theoretical knowledge, the second is much faster. The improvement of the control law achieved by this approach, measured in terms of the cost function, is up to 30% with respect to the LQR.
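The policy-iteration scheme described above (evaluate the cost of the current control law, then improve the law from that cost) can be sketched in the linear-quadratic special case, where policy evaluation reduces to a discrete Lyapunov equation and the iteration converges to the LQR gain. This is a minimal illustration, not the paper's method for the nonlinear pendulum: the system matrices below are an arbitrary double-integrator-like example, and the initial gain is simply any stabilizing one.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

def policy_iteration(A, B, Q, R, K0, iters=50):
    """Policy iteration for the discrete-time LQR problem.

    Alternates policy evaluation (solve a Lyapunov equation for the
    cost matrix P of the current gain K) and policy improvement
    (recompute K from P); converges to the optimal LQR gain.
    """
    K = K0
    for _ in range(iters):
        Acl = A - B @ K                        # closed-loop dynamics
        # Policy evaluation: P = Acl' P Acl + Q + K' R K
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
        # Policy improvement: K = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Illustrative system (NOT the paper's pendulum model)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[1.0, 1.0]])   # a stabilizing initial gain

K, P = policy_iteration(A, B, Q, R, K0)
```

In the nonlinear setting of the paper, the Lyapunov equation is replaced by the Generalized Hamilton-Jacobi-Bellman equation, which is what the two implementations (simulation-based and numerical) approximate.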