The quadratic cost optimal control problem for systems described by
linear ordinary differential equations occupies a central role in the
study of control systems both from the theoretical and design points of
view. The study of this problem over an infinite time horizon shows the
beautiful interplay between optimality and qualitative system properties
such as controllability, observability, and stability. This
theory is far more difficult for infinite-dimensional systems such as
systems with time delay and distributed parameter systems. In the first
place, the difficulty stems from the essential unboundedness of the
system operator. In the second place, when control and observation are
exercised through the boundary of the domain, the operators representing
the sensor and the actuator are also often unbounded.
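For orientation, a minimal sketch of the standard finite-dimensional problem that the book generalizes may be helpful; the symbols $A$, $B$, $Q$, $R$, $P$ below are the generic ones of linear-quadratic theory, not the book's own notation. For the linear system $\dot{x}(t) = A x(t) + B u(t)$, $x(0) = x_0$, the infinite-horizon quadratic cost is
\[
  J(u) = \int_0^\infty \bigl( x(t)^{\top} Q\, x(t) + u(t)^{\top} R\, u(t) \bigr)\, dt,
  \qquad Q = Q^{\top} \ge 0, \quad R = R^{\top} > 0 .
\]
Under the usual stabilizability and detectability assumptions, the optimal control is the linear state feedback $u^{*}(t) = -R^{-1} B^{\top} P x(t)$, where $P = P^{\top} \ge 0$ is the stabilizing solution of the algebraic Riccati equation
\[
  A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0 .
\]
In the infinite-dimensional setting treated in the book, $A$ is replaced by the generator of a strongly continuous semigroup and, for boundary control and observation, the input and observation operators become unbounded, which is precisely the source of the difficulties noted above.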
The present book, in two volumes, is in some sense a self-contained
account of this theory of quadratic cost optimal control for a large
class of infinite-dimensional systems. Volume I deals with the theory of
time evolution of controlled infinite-dimensional systems. It contains a
reasonably complete account of the necessary semigroup theory and the
theory of delay-differential and partial differential equations. Volume
II deals with the optimal control of such systems when performance is
measured via a quadratic cost. It covers recent work on the boundary
control of hyperbolic systems and exact controllability. Some of the
material covered here appears for the first time in book form.
The book should be useful to mathematicians and to theoretically inclined
engineers interested in the field of control.