One of the main problems in control theory is the stabilization problem, which consists in finding a feedback control law that ensures stability; when the linear approximation is considered, the natural problem is the stabilization of a linear system by linear state feedback or by a linear dynamic controller. This problem has been studied intensively over the last decades, and many important results have been obtained.
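To fix ideas, here is a minimal statement of this problem in generic notation (the symbols below are illustrative and not necessarily those fixed later in the text): for a linear system
\[
\dot{x}(t) = A\,x(t) + B\,u(t),
\]
a linear state feedback $u(t) = F\,x(t)$ produces the closed-loop system $\dot{x}(t) = (A + BF)\,x(t)$, and the feedback is stabilizing precisely when $A + BF$ is a Hurwitz matrix, that is, when all of its eigenvalues have negative real parts.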
The present monograph is based mainly on results obtained by the authors. It
focuses on stabilization of systems with slow and fast motions, on
stabilization procedures that use only limited information about the system
(high-gain stabilization and adaptive stabilization), and also on
discrete-time implementation of the stabilizing procedures. These
topics are important in many applications of stabilization theory. We
hope that this monograph may illustrate the way in which mathematical
theories influence advanced technology. This book is intended neither as a textbook nor as a guide for control designers. In engineering practice, control design is a very complex task in which stability is only one of the requirements, and many aspects of the problem have to be taken into consideration. Even if we restrict ourselves to stabilization, the book does not merely provide recipes; it focuses on the ideas behind them. In short, this
is not a book on control, but on some mathematics of control.