As computer technology develops, spoken dialogue is becoming
increasingly important for interacting with a wide variety of
technological devices, including personal digital assistants, tablet
PCs, and mobile phones. Speech makes such interfaces more natural and
user-friendly. More
specifically, the authors of this volume contend that the experience of
talking to our computerized gadgets may be greatly improved by
dynamically adapting the system's dialogue interaction style to the
user's profile and emotional status. This book describes a novel
approach that combines speech-based emotion recognition with adaptive
human-computer dialogue modeling. Aiming at the robust recognition of
emotions from speech signals, the authors analyze the effectiveness of
three configurations: a plain emotion recognizer; a combined
speech-emotion recognizer that performs speech and emotion recognition
jointly; and multiple speech-emotion recognizers operating in parallel.
The semi-stochastic dialogue model
employed links user emotion management to the corresponding dialogue
interaction history, allowing the device to adapt itself to the
context, for example by altering the stylistic realization of its
speech.
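By way of illustration only, the idea of adapting stylistic realization to a recognized emotion might be sketched as below. This is a hypothetical toy, not the book's semi-stochastic model: the class name, the style table, and the simple majority-vote rule over recent turns are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical sketch: choose the system's speaking style from the
# user's recognized emotion, smoothed over recent interaction history.
# This is NOT the book's semi-stochastic model, only an illustration.

STYLES = {
    "angry":   "calm and apologetic",
    "sad":     "warm and encouraging",
    "neutral": "plain and efficient",
    "happy":   "upbeat and informal",
}

class AdaptiveDialogueManager:
    def __init__(self, window=3):
        self.window = window   # number of recent turns to consider
        self.history = []      # recognized emotion per user turn

    def observe(self, emotion):
        """Record the emotion recognized for the latest utterance."""
        self.history.append(emotion)

    def current_style(self):
        """Pick a style from the most frequent recent emotion."""
        recent = self.history[-self.window:] or ["neutral"]
        emotion, _ = Counter(recent).most_common(1)[0]
        return STYLES.get(emotion, STYLES["neutral"])

mgr = AdaptiveDialogueManager()
for e in ["neutral", "angry", "angry"]:
    mgr.observe(e)
print(mgr.current_style())  # majority recent emotion is "angry"
```

Smoothing over a short window, rather than reacting to a single turn, keeps the system's style from flipping on one misrecognized emotion.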
This comprehensive volume begins by introducing spoken language
dialogue systems and giving an overview of human emotions: their
theories, categorization, and expression in speech. It moves on to
cover the adaptive
semi-stochastic dialogue model and the basic concepts of speech-emotion
recognition. Finally, the authors show how speech-emotion recognizers
can be optimized, and how an adaptive dialogue manager can be
implemented. With its novel methods for robust, low-complexity
speech-based emotion recognition, the book will be of interest to a
wide range of readers involved in human-computer interaction.