Source coding theory has as its goal the characterization of the optimal
performance achievable in idealized communication systems that must
code an information source for transmission over a digital communication
or storage channel to a user. The user must then decode the
information into a form that is a good approximation to the original. A
code is optimal within some class if it achieves the best possible
fidelity given whatever constraints are imposed on the code by the
available channel. In theory, the primary constraint imposed on a code
by the channel is its rate or resolution, the number of bits per second
or per input symbol that it can transmit from sender to receiver. In the
real world, complexity may be as important as rate. The origins and the
basic form of much of the theory date from Shannon's classical
development of noiseless source coding and source coding subject to a
fidelity criterion (also called rate-distortion theory) [73], [74].
Shannon combined a probabilistic notion of information with limit
theorems from ergodic theory and a random coding technique to describe the
optimal performance of systems with a constrained rate but with
unconstrained complexity and delay. An alternative approach, called
asymptotic or high rate quantization theory, based on different techniques
and approximations, was introduced by Bennett at approximately the same
time [4]. This approach constrained the delay but allowed the rate to
grow large.
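To make the notion of rate concrete (the symbols $R$ and $N$ here are
illustrative and not taken from the text above), a fixed-rate code that
assigns one of $N$ binary codewords of length $R$ bits to each input
symbol operates at rate
$$
R = \log_2 N \quad \text{bits per input symbol};
$$
for example, an 8-bit uniform quantizer has $N = 2^8 = 256$ levels and
hence rate $R = 8$ bits per sample.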