This book introduces optimal control problems for large families of
deterministic and stochastic systems with a discrete or continuous time
parameter. These families include most of the systems studied in
disciplines such as Economics, Engineering, Operations Research, and
Management Science, among many others.
The main objective is to give a concise, systematic, and reasonably
self-contained presentation of some key topics in optimal control theory.
To this end, most of the analyses are based on the dynamic programming
(DP) technique, which is applicable to almost all control problems that
appear in theory and applications. These include, for instance, finite-
and infinite-horizon control problems in which the underlying dynamic
system follows either a deterministic or a stochastic difference or
differential equation. In the infinite-horizon case, the book also uses
DP to study undiscounted problems, such as those with an ergodic or
long-run average cost.
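To fix ideas, a typical instance of the DP technique, stated here only as
an illustrative sketch and with notation that is not necessarily the
book's own, is the Bellman equation for a discounted, infinite-horizon,
discrete-time problem, in which the optimal value function V satisfies
\[
  V(x) \;=\; \min_{a \in A(x)} \Bigl\{ c(x,a)
    + \alpha\, \mathbb{E}\bigl[\, V(x_{t+1}) \mid x_t = x,\ a_t = a \,\bigr] \Bigr\},
\]
where x denotes the state, A(x) the set of admissible actions, c(x,a) the
one-stage cost, and \alpha \in (0,1) the discount factor; in the
deterministic case the expectation reduces to evaluating V at the next
state.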
After a general introduction to control problems, the book is divided
into four parts, each dealing with a different class of dynamical
systems: control of discrete-time deterministic systems, discrete-time
stochastic systems, ordinary differential equations, and finally a
general continuous-time MCP with applications to stochastic differential
equations.
The first and second parts should be accessible to undergraduate students
with some knowledge of elementary calculus, linear algebra, and some
concepts from probability theory (random variables, expectations, and so
forth). The third and fourth parts, in contrast, are appropriate for
advanced undergraduates or graduate students with a working knowledge of
mathematical analysis (derivatives, integrals, and so on) and stochastic
processes.