Using material from many different sources in a systematic and unified way, this self-contained book provides both rigorous mathematical theory and practical numerical insights while developing a framework for determining the convergence rate of discrete approximations to optimal control problems. Elements of the framework include the reference point, the truncation error, and a stability theory for the linearized first-order optimality conditions.
Within this framework, the discretized control problem has a stationary point whose distance to the reference point is bounded in terms of the truncation error. The theory applies to a broad range of discretizations and provides completely new insights into the convergence theory for discrete approximations in optimal control, including the relationship between orthogonal collocation and Runge–Kutta methods.
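Schematically, and in generic notation not tied to the book's, the central estimate has the form

\[
\|x^h - x^\ast\| \;\le\; c\,\|\tau^h\|,
\]

where \(x^h\) is a stationary point of the discretized problem, \(x^\ast\) is the reference point, \(\tau^h\) is the truncation error obtained by inserting the reference point into the discrete first-order optimality conditions, and the constant \(c\) is supplied by the stability theory for the linearized optimality conditions.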
Throughout the book, derivatives associated with the discretized control problem are expressed in terms of a back-propagated costate. In particular, the derivative of the objective of a bang-bang or singular control problem with respect to a switch point of the control is obtained, which leads to the efficient solution of a class of nonsmooth control problems using a gradient-based optimizer.
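To make the switch-point derivative concrete: for a control that jumps from u- to u+ at time s, one standard expression is dJ/ds = p(s)^T [f(x(s), u-) - f(x(s), u+)], where p is the back-propagated costate. The sketch below is a minimal illustration, not code from the book; the scalar dynamics f(x, u) = -x + u, the horizon T, the switch values ±1, and the objective J = 0.5 x(T)^2 are all hypothetical choices. It evaluates the formula and checks it against a finite difference:

```python
# Minimal sketch: derivative of the objective with respect to a control
# switch point via a back-propagated costate.  Dynamics, horizon, and
# objective are illustrative choices, not examples from the book.

def f(x, u):
    return -x + u  # hypothetical scalar dynamics

def forward(s, x0=0.0, T=2.0, n=4000):
    """Forward Euler with u = +1 on [0, s] and u = -1 on [s, T];
    the grid is split at s so that x(T) depends smoothly on s."""
    x, h = x0, s / n
    for _ in range(n):
        x += h * f(x, +1.0)
    x_s = x                       # state at the switch point
    h = (T - s) / n
    for _ in range(n):
        x += h * f(x, -1.0)
    return x_s, x                 # x(s), x(T)

def costate_at_switch(s, x_T, T=2.0, n=4000):
    """Back-propagate dp/dt = -dH/dx = p from p(T) = x(T) down to t = s."""
    p, h = x_T, (T - s) / n
    for _ in range(n):
        p -= h * p                # Euler step backward in time
    return p

def dJ_ds(s, T=2.0):
    """Switch-point derivative dJ/ds = p(s) * (f(x(s), u-) - f(x(s), u+))."""
    x_s, x_T = forward(s, T=T)
    p_s = costate_at_switch(s, x_T, T=T)
    return p_s * (f(x_s, +1.0) - f(x_s, -1.0))

if __name__ == "__main__":
    s, eps = 0.7, 1e-5
    J = lambda t: 0.5 * forward(t)[1] ** 2
    fd = (J(s + eps) - J(s - eps)) / (2 * eps)
    print(dJ_ds(s), fd)           # values agree to discretization accuracy
```

Because the switch point enters the discretization smoothly, this costate-based derivative can drive a standard gradient-based optimizer over the switch points, in the spirit of the approach described above.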