Lagrange and penalty function methods provide a powerful approach, both
as a theoretical tool and a computational vehicle, for the study of
constrained optimization problems. However, for a nonconvex constrained
optimization problem, the classical Lagrange primal-dual method may fail
to find a minimum, since a zero duality gap is not always guaranteed.
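To illustrate, consider the following simple nonconvex problem (constructed
here only for illustration, with the usual primal-dual notation):

% Illustrative example of a positive duality gap for a nonconvex problem.
\begin{align*}
  &\min_{x \in X} \; -x^{2}
    \quad \text{subject to} \quad x - 1 \le 0, \qquad X = [0,2];\\
  &\text{primal value: } \min_{x \in [0,1]} \bigl(-x^{2}\bigr) = -1,
    \text{ attained at } x^{*} = 1;\\
  &\text{dual function: } q(\lambda)
     = \min_{x \in [0,2]} \bigl[-x^{2} + \lambda (x - 1)\bigr]
     = \min\{-\lambda,\ \lambda - 4\};\\
  &\text{dual value: } \max_{\lambda \ge 0} q(\lambda) = q(2) = -2 < -1.
\end{align*}

Since the minimand defining $q$ is concave in $x$, its minimum over $[0,2]$
is attained at an endpoint, which gives the closed form of $q$; the duality
gap here equals 1.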
A large penalty parameter is, in general, required for classical quadratic
penalty functions in order that the minimizers of the penalty problems be a
good approximation to those of the original constrained optimization
problems. It is well known that penalty functions with very large parameters
pose an obstacle to numerical implementation.
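A minimal one-dimensional sketch of this effect (the example is constructed
here only for illustration): to minimize $x$ subject to $x \ge 1$, the
classical quadratic penalty function and its unconstrained minimizer are

% Quadratic penalty for: minimize x subject to x >= 1 (true solution x* = 1).
\begin{align*}
  &P_{c}(x) = x + c \bigl[\max\{0,\ 1 - x\}\bigr]^{2}, \qquad c > 0,\\
  &\operatorname*{arg\,min}_{x \in \mathbb{R}} P_{c}(x) = 1 - \frac{1}{2c},
   \qquad \min_{x} P_{c}(x) = 1 - \frac{1}{4c},
   \qquad P_{c}''(x) = 2c \quad \text{for } x < 1.
\end{align*}

The unconstrained minimizer reaches the true solution $x^{*} = 1$ only in the
limit $c \to +\infty$, while the curvature $2c$ on the penalized region grows
without bound; this is the ill-conditioning referred to above.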
Thus the question arises of how to generalize classical Lagrange and
penalty functions in order to obtain an appropriate scheme for reducing
constrained optimization problems to unconstrained ones, a scheme suitable
for sufficiently broad classes of optimization problems from both the
theoretical and the computational viewpoints. Some approaches to
such a scheme are studied in this book. One of them is as follows: an
unconstrained problem is constructed, where the objective function is a
convolution of the objective and constraint functions of the original
problem. While a linear convolution leads to the classical Lagrange
function, different kinds of nonlinear convolutions lead to interesting
generalizations. We shall call functions that appear as a convolution of
the objective function and the constraint functions, Lagrange-type
functions.
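To make the convolution scheme concrete (the notation $f$, $f_{i}$,
$\lambda$, $d$ is introduced here only for illustration), consider a problem
of minimizing $f(x)$ subject to $f_{i}(x) \le 0$, $i = 1, \dots, m$. A linear
convolution of the objective and constraint functions recovers the classical
Lagrange function, while a max-type convolution, for example, gives one
nonlinear Lagrange-type function:

% Two convolutions of the objective f and the constraints f_1, ..., f_m.
\begin{align*}
  &\text{linear convolution (the classical Lagrange function):}\\
  &\qquad L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_{i} f_{i}(x),
   \qquad \lambda_{i} \ge 0;\\
  &\text{a nonlinear (max-type) convolution:}\\
  &\qquad L(x, d) = \max\bigl\{\, f(x),\ d_{1} f_{1}(x),\ \dots,\ d_{m} f_{m}(x) \,\bigr\},
   \qquad d_{i} > 0.
\end{align*}

Other nonlinear convolutions, for instance
$f(x) + d \max_{1 \le i \le m} \max\{0, f_{i}(x)\}$, recover the classical
nonsmooth (sharp) penalty function; the choice of the convolution determines
the properties of the resulting Lagrange-type function.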