
E-book: Expressive Conversational-Behavior Generation Model For Advanced Interaction within Multimodal User Interfaces

  • Length: 248 pages
  • Publication date: 01-Nov-2016
  • Publisher: Nova Science Publishers Inc
  • ISBN-13: 9781634840842
  • Format: PDF+DRM
  • Price: 284.05 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not permitted

  • Printing:

    not permitted

  • Usage:

    Digital rights protection (DRM)
    The publisher has issued this e-book in encrypted form, which means that you need to install special software in order to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The aim of the book is to present a flexible and efficient algorithm, and a novel system, for the planning, generation, and realization of conversational behavior (co-verbal behavior). Such behavior is best described as a set of meaningful movements of body parts, synchronized with the prosody of the accompanying speech. The movements and shapes generated as co-verbal behavior represent a contextual link between a repertoire of independent motor skills (shapes, movements, and poses that a conversational agent can reproduce and execute) and the intent/meaning of the spoken sequences (context). The actual intent/meaning of spoken content is identified through language-dependent linguistic markers and prosody.

The knowledge databases used to determine the intent/meaning of text are based on the linguistic analysis and classification of the text into semiotic classes and subclasses, achieved through the annotation of multimodal corpora based on the proposed EVA annotation scheme. The scheme captures features at a functional (context-dependent) level as well as at a descriptive (context-independent) level. The functional level captures high-level features that describe the correlation between speech and co-verbal behavior, whereas the descriptive level defines body poses and shapes independently of verbal content and in high resolution. The annotation scheme therefore not only interlinks speech and gesture at a semiotic level, but also serves as a basis for the creation of a context-independent repertoire of movements and shapes.

In this book, the process of generating co-verbal behavior is divided into two phases. The first phase deals with the classification of intent and its synchronization with the verbal content and prosody; the second phase transforms the planned and synchronized behavior into a co-verbal animation performed by an embodied conversational agent (ECA). In order to extrapolate intent from arbitrary text sequences, the algorithm for the formulation of behavior deduces meaning/intent with regard to semiotic intent. Furthermore, the algorithm considers the linguistic features of arbitrary, un-annotated text and selects primitive gestures based on semiotic nuclei, as identified by semiotic classification and further modeled by the prosodic features of the speech predicted by a general text-to-speech (TTS) system. The output of the behavior-formulation phase is a hierarchical procedure encoded in XML format, together with the speech sequence generated by the TTS. The procedural description is event-oriented and represents a well-defined structure of consecutive movements of body parts, as well as of body parts moving in parallel.

The second phase of the novel architecture transforms these procedural descriptions into a series of coherent animations of the individual parts of the articulated embodied conversational agent. In this regard, a novel ECA-based realization framework named EVA-framework is also presented. It supports the real-time realization of procedural animation descriptions on multi-part, mesh-based models, using skeletal animation, blend-shape animation, and the animation of predefined (pre-recorded) animated segments. The book therefore covers the complete design and implementation of an expressive model for the generation of co-verbal behavior, one that is able to transform un-annotated text into a speech-synchronized series of animated sequences.
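
As the abstract describes, the first phase outputs an event-oriented, hierarchical XML procedure in which consecutive movements of one body part nest inside a sequence while independent body parts run in parallel. The following minimal sketch illustrates what such a description might look like; the element and attribute names (behavior, parallel, sequence, pose, and their attributes) are assumptions made for this illustration, not the actual EVA-Script vocabulary, which the book defines in Chapter 4:

    <!-- Illustrative sketch only: tag and attribute names are assumed,
         not taken from the book's EVA-Script specification. -->
    <behavior utterance="u01">
      <!-- body parts moving in parallel -->
      <parallel>
        <!-- consecutive movements of a single body part -->
        <sequence part="rightArm">
          <pose shape="palmUp" start="0.20s" duration="0.45s"/>
          <pose shape="rest" start="0.65s" duration="0.30s"/>
        </sequence>
        <sequence part="head">
          <!-- aligned with the prosodic prominence predicted by the TTS -->
          <pose shape="nod" start="0.20s" duration="0.25s"/>
        </sequence>
      </parallel>
    </behavior>

A realization framework such as the one described in Chapter 8 would walk this structure as a timed event list, dispatching each pose to the animation controller of the corresponding body part.
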
Preface  vii
Chapter 1  An Expressive Model for the Generation of Co-Verbal Behavior  1(18)
    Introduction  1(3)
    Embodied Conversational Agents and Management of Interaction  4(4)
    Applications of Conversational Models and Agents  8(3)
    The Types of Co-Verbal Models  11(3)
    The Methods and Frameworks for the Realization of Co-Verbal Behavior  14(3)
    Expressive Conversational Model EVA  17(2)
Chapter 2  A System for the Generation of Co-Verbal Conversational Behavior  19(12)
    Introduction  19(3)
    The Data Structures for the Expressive Speech Synthesizer  22(6)
    The Language Resources of the Co-Verbal Behavior Generation System  28(3)
Chapter 3  The Annotation and Description of Co-Verbal Behavior  31(20)
    The Methodology for Measuring the Reliability of the Annotation Schemes  34(1)
    Tools for Tagging the Co-Verbal Behavior  35(2)
    The EVA Scheme for Annotating Co-Verbal Behavior  37(14)
Chapter 4  Markup Language for the Specification of Shapes, Poses, and Behavior of Conversational Agents - EVA-Script  51(10)
    Introduction  51(1)
    EVA-Script  52(9)
Chapter 5  Description of Motoric Capabilities of Conversational Agent  61(20)
    How to Describe the Expressive Shapes and Poses  65(1)
    How to Describe the Spatial Configurations of Body Parts  66(5)
    How to Describe Trajectories  71(6)
    How to Describe the Facial Expressions  77(4)
Chapter 6  Correlating Shapes and Semiotic Intent  81(12)
    The Classification of the Intent Using the Semiotic Grammar  81(6)
    Relating Shapes and Intent through the Co-Verbal Behavior Lexicon - Gesticon  87(6)
Chapter 7  Algorithm for the Automatic Generation of Expressive Co-Verbal Behavior  93(60)
    Introduction  93(2)
    Phase I  The Classification of the Intent  95(4)
    Phase II  The Planning of the Intent  99(13)
    Phase III  The Planning of the Movement  112(15)
    Phase IV  The Synchronization of the Poses  127(19)
    Phase V  The Generation of Co-Verbal Behavior  146(7)
Chapter 8  The Framework for the Realization of Co-Verbal Behavior on the Conversational Agent - EVA-framework  153(16)
    Introduction  153(1)
    Multi-Part 3D Model of the Conversational Agent - EVA  154(3)
    The Realization of a Procedural Description of Co-Verbal Behavior  157(4)
    The Realization of Facial Expressions and Emotions  161(4)
    The Realization of Head Gestures and Gaze  165(3)
    The Realization of Subconscious Behavior  168(1)
Chapter 9  The Expressive Model and Multimodal User Interfaces  169(20)
    The MWP Platform  170(12)
    The Multimodal Interface for UMB-SmartTV System  182(1)
    Expressive Co-Verbal Behavior Generation for Artificial Bodies  183(6)
Chapter 10  The Embodied Conversational Agent EVA  189(26)
    Introduction  189(3)
    The Synthesis of Basic and Complex Emotions  192(13)
    The Synthesis of Co-Verbal Behavior of Gestures with Arms, Hands and Head in EVA-Framework  205(6)
    Evaluating Expressive Co-Verbal Behavior  211(4)
References  215(14)
Index  229