Artificial neural networks have been recognized as a powerful tool to learn and reproduce systems in various fields of application. Neural networks are inspired by the behavior of the brain and consist of one or several layers of neurons, or computing units, connected by links. Each artificial neuron receives an input value from the input layer or from the neurons in the previous layer. It then computes a scalar output by applying a given scalar function (the activation function), assumed to be the same for all neurons, to a linear combination of the received inputs.
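For illustration, and using generic notation that is not fixed elsewhere in this section, the output of a neuron $j$ can be written as
$$ y_j = f\Big(\sum_{i} w_{ij}\, x_i\Big), $$
where the $x_i$ are the inputs received from the previous layer, the $w_{ij}$ are the connection weights, and $f$ is the common activation function.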
One of the main properties of neural networks is their ability to learn from data. There are two types of learning: structural and parametric. Structural learning consists of learning the topology of the network, that is, the number of layers, the number of neurons in each layer, and which neurons are connected. This process is carried out by trial and error until a good fit to the data is obtained. Parametric learning consists of learning the weight values for a given topology of the network. Since the neural functions are given, this learning process is achieved by estimating the connection weights from the available information. To this aim, an error function is minimized using one of several well-known learning methods, such as the backpropagation algorithm.
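For instance, given training data $\{(\mathbf{x}_p, t_p)\}_{p=1}^{P}$ and a sum-of-squared-errors criterion (a common choice; the notation here is illustrative only), the weights $\mathbf{w}$ are obtained by minimizing
$$ E(\mathbf{w}) = \frac{1}{2}\sum_{p=1}^{P}\big(o_p(\mathbf{w}) - t_p\big)^2, $$
where $o_p(\mathbf{w})$ denotes the network output for input $\mathbf{x}_p$. Backpropagation performs gradient descent on $E$, updating each weight as $w_{ij} \leftarrow w_{ij} - \eta\,\partial E/\partial w_{ij}$ with learning rate $\eta$.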
Unfortunately, these methods present the following drawbacks: (a) The function resulting from the learning process has no physical or engineering interpretation; thus, neural networks are seen as black boxes.