This book presents the latest findings on stochastic dynamic programming
models and on solving optimal control problems in networks. It includes
the authors' new findings on determining optimal solutions of discrete
optimal control problems in networks and on solving game-theoretic
variants of Markov decision problems in the context of computational
networks. First, the book studies Markov
processes and reviews the existing methods and algorithms for
determining the main characteristics in Markov chains, before proposing
new approaches based on dynamic programming and combinatorial methods.
Chapter Two is dedicated to infinite-horizon stochastic discrete optimal
control models and to Markov decision problems with average and expected
total discounted optimization criteria, while Chapter Three develops a
special game-theoretic approach to Markov decision processes and
stochastic discrete optimal control problems. In closing, the book's
final chapter is devoted to finite horizon stochastic control problems
and Markov decision processes. The algorithms developed represent a
valuable contribution to the important field of computational network
theory.