Parallel computing is the simultaneous use of multiple computing resources
to solve a computing problem and thereby reduce the computation time. To
make many processors work simultaneously on a single program, the program
must be divided into smaller independent chunks so that each processor
can work on a separate part of the problem. Parallel computing is one of
the most effective ways to increase computational speed, and its
application is growing rapidly in scientific and engineering computation.
In this book, parallel computing techniques are investigated, developed,
and tested for two types of computer hardware architecture, and the
results show a substantial reduction in computation time.
Of the two architectures, the one of main interest is the Distributed
Memory Processor (DMP). DMP systems require a communication network to
connect the memories of the individual processors (for example, a
cluster of ordinary computers). To parallelize programs on DMP systems
we use the Message Passing Interface (MPI), a library of message-passing
routines. The results presented in this book show the reduction in
computation time achieved by the parallel programs over their sequential
counterparts.