THE CONTEXT OF PARALLEL PROCESSING

The field of digital computer
architecture has grown explosively in the past two decades. Through a
steady stream of experimental research, tool-building efforts, and
theoretical studies, the design of an instruction-set architecture, once
considered an art, has been transformed into one of the most
quantitative branches of computer technology. At the same time, a better
understanding of various forms of concurrency, from standard pipelining
to massive parallelism, and the invention of architectural structures to
support a reasonably efficient and user-friendly programming model for
such systems, have allowed hardware performance to continue its
exponential growth. This trend is expected to continue in the near
future. This explosive growth, coupled with the expectation that
performance will continue its exponential rise with each new generation
of hardware and that (in stark contrast to software) computer hardware
will function correctly as soon as it comes off the assembly line, has
its downside: it has led to unprecedented hardware complexity and
almost intolerable development costs. The challenge facing current and
future computer designers is to institute simplicity where we now have
complexity; to use the fundamental theories being developed in this area
to gain performance and ease-of-use benefits from simpler circuits; and
to understand the interplay between technological capabilities and
limitations, on the one hand, and design decisions based on user and
application requirements, on the other.