Dynamic Programming is the analysis of multistage decision processes in the
sequential mode. It is now widely recognized as a tool of great
versatility and power, and is applied to an increasing extent in all
phases of economic analysis, operations research, technology, and also
in mathematical theory itself. In economics and operations research its
impact may someday rival that of linear programming. The importance of
this field is made apparent through a growing number of publications.
Foremost among these is the pioneering work of Bellman. It was he who
originated the basic ideas, formulated the principle of optimality,
recognized its power, coined the terminology, and developed many of the
present applications. Since then mathematicians, statisticians,
operations researchers, and economists have joined in, laying more
rigorous foundations [KARLIN, BLACKWELL] and developing in depth such
applications as the control of stochastic processes [HOWARD,
JEWELL]. The field of inventory control has almost split off as an
independent branch of Dynamic Programming on which a great deal of
effort has been expended [ARROW, KARLIN, SCARF], [WHITIN],
[WAGNER]. Dynamic Programming is also playing an increasing role in
modern mathematical control theory [BELLMAN, Adaptive Control
Processes (1961)]. Some of the most exciting work is going on in
adaptive programming, which is closely related to sequential statistical
analysis, particularly in its Bayesian form. In this monograph the
reader is introduced to the basic ideas of Dynamic Programming.
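
To make the principle of optimality concrete (in generic notation, not necessarily the monograph's): consider an N-stage process with state $x$, admissible decisions $d$, single-stage return $g(x, d)$, and transition law $x' = T(x, d)$. The principle asserts that an optimal policy has the property that, whatever the initial state and initial decision, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. Writing $f_n(x)$ for the maximal return obtainable from state $x$ with $n$ stages remaining, this yields the functional equation

$$
f_n(x) = \max_{d} \bigl[\, g(x, d) + f_{n-1}\bigl(T(x, d)\bigr) \,\bigr], \qquad f_0(x) \equiv 0,
$$

which is solved stage by stage, starting from $f_0$.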