Whether the results of statistical procedures are accepted is
strongly influenced by how they are interpreted. Effects caused by the
data encoding and by the order in which the data occur in the learning
sample are particularly problematic. Modern data collections often
contain large numbers of items, each including many variables. These
variables are usually measured on different scales, among which the
ordinal scale is the most common. Versatile and efficient data analysis
models are required to mine such data. Multi-layer perceptron (MLP)
Networks are very flexible models for analyzing problems that have an
input-output structure. These techniques are well known in artificial
intelligence and provide models for non-linear statistical regression
and classification with efficient learning algorithms. The author of
this thesis develops extensions to MLP networks suitable for the
appropriate analysis of ordinal data occurring both as inputs and
outputs. Reviewing the learning procedure, he introduces a new learning
paradigm that combines the advantages of batch learning (statistically
better results) with those of incremental estimation (algorithmic
efficiency). This allows an
efficient online adaptation of the model without being compromised by
the dependence on either a learning parameter or the ordering of the
data set. This book addresses researchers, lecturers and students of
mathematics, informatics and artificial intelligence. It may also be
of interest to those who deal with data analysis in their daily work.
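The contrast between the two learning paradigms mentioned above can be
illustrated with a minimal sketch. The code below is not taken from the
book; it uses plain gradient descent on a linear model (all data and
parameter choices are illustrative) to show batch learning, which takes
one step per pass over the full sample, versus incremental (online)
learning, which updates after every single item and therefore depends on
a learning rate and on the ordering of the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative only): y = 2*x + 1 plus a little noise.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.05 * rng.normal(size=200)
Xb = np.hstack([X, np.ones((200, 1))])  # append a bias column


def batch_gd(Xb, y, lr=0.5, epochs=100):
    """Batch learning: one gradient step per pass over the whole sample."""
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        grad = Xb.T @ (Xb @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w


def online_gd(Xb, y, lr=0.1, epochs=100):
    """Incremental (online) learning: update after every single item.

    The trajectory, and in general the result, depends on the learning
    rate lr and on the order in which the items are presented.
    """
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            w -= lr * (xi @ w - yi) * xi
    return w


w_batch = batch_gd(Xb, y)
w_online = online_gd(Xb, y)
# Both estimates approach the generating parameters (slope 2, intercept 1).
```

The paradigm developed in the book aims to retain the statistical
behaviour of the batch variant while keeping the per-item efficiency of
the online variant; the sketch only shows the two classical extremes it
combines.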