
E-book: Emotion Recognition: A Pattern Analysis Approach

  • Format: PDF+DRM
  • Publication date: 10-Dec-2014
  • Publisher: John Wiley & Sons Inc
  • Language: English
  • ISBN-13: 9781118910610
  • Price: 145.67 €*
  • * the price is final, i.e. no further discounts apply
  • Add to cart
  • Add to wishlist
  • This e-book is intended for personal use only. E-books cannot be returned.
  • For libraries

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

A timely book covering the foundations and current research directions of emotion recognition from facial expressions, voice, gestures, and biopotential signals

This book provides a comprehensive examination of the research methodology behind different modalities of emotion recognition. Key topics include emotion recognition based on facial expressions, voice, and biopotential signals. Special emphasis is given to feature selection, feature reduction, classifier design, and multimodal fusion to improve the performance of emotion classifiers.

Written by experts in the field, the book covers a range of tools and techniques, including dynamic Bayesian networks, neural networks, hidden Markov models, rough sets, type-2 fuzzy sets, and support vector machines, and their applications to emotion recognition across different modalities. The book ends with a discussion of emotion recognition in the automotive field, used to detect driver stress and anger, which degrade driving performance and ability.

There is an increasing demand for emotion recognition in diverse fields, including psychotherapy, biomedicine, and security in government, public, and private agencies. Emotion recognition has also been prioritized by industry, including Hewlett-Packard, in the design and development of next-generation human-computer interface (HCI) systems.

Emotion Recognition: A Pattern Analysis Approach will be of great interest to researchers, graduate students, and practitioners, as the book

  • Offers both foundations and advances in emotion recognition in a single volume
  • Provides a thorough and insightful introduction to the subject using computational tools from diverse domains
  • Inspires young researchers to prepare for their own research
  • Demonstrates the direction of future research through new technologies, such as the Microsoft Kinect, EEG systems, etc.
Table of Contents

Preface xix
Acknowledgments xxvii
Contributors xxix
1 Introduction to Emotion Recognition
1(46)
Amit Konar
Anisha Halder
Aruna Chakraborty
1.1 Basics of Pattern Recognition
1(1)
1.2 Emotion Detection as a Pattern Recognition Problem
2(1)
1.3 Feature Extraction
3(12)
1.3.1 Facial Expression-Based Features
3(4)
1.3.2 Voice Features
7(2)
1.3.3 EEG Features Used for Emotion Recognition
9(2)
1.3.4 Gesture- and Posture-Based Emotional Features
11(1)
1.3.5 Multimodal Features
12(3)
1.4 Feature Reduction Techniques
15(2)
1.4.1 Principal Component Analysis
15(1)
1.4.2 Independent Component Analysis
16(1)
1.4.3 Evolutionary Approach to Nonlinear Feature Reduction
16(1)
1.5 Emotion Classification
17(7)
1.5.1 Neural Classifier
17(4)
1.5.2 Fuzzy Classifiers
21(1)
1.5.3 Hidden Markov Model Based Classifiers
22(1)
1.5.4 k-Nearest Neighbor Algorithm
22(1)
1.5.5 Naive Bayes Classifier
23(1)
1.6 Multimodal Emotion Recognition
24(1)
1.7 Stimulus Generation for Emotion Arousal
24(2)
1.8 Validation Techniques
26(1)
1.8.1 Performance Metrics for Emotion Classification
27(1)
1.9 Summary
27(20)
References
28(16)
Author Biographies
44(3)
2 Exploiting Dynamic Dependencies Among Action Units for Spontaneous Facial Action Recognition
47(22)
Yan Tong
Qiang Ji
2.1 Introduction
48(1)
2.2 Related Work
49(1)
2.3 Modeling the Semantic and Dynamic Relationships Among AUs With a DBN
50(10)
2.3.1 A DBN for Modeling Dynamic Dependencies among AUs
51(3)
2.3.2 Constructing the Initial DBN
54(1)
2.3.3 Learning DBN Model
55(4)
2.3.4 AU Recognition Through DBN Inference
59(1)
2.4 Experimental Results
60(4)
2.4.1 Facial Action Unit Databases
60(1)
2.4.2 Evaluation on Cohn and Kanade Database
61(1)
2.4.3 Evaluation on Spontaneous Facial Expression Database
62(2)
2.5 Conclusion
64(5)
References
64(2)
Author Biographies
66(3)
3 Facial Expressions: A Cross-Cultural Study
69(20)
Chandrani Saha
Washef Ahmed
Soma Mitra
Debasis Mazumdar
Sushmita Mitra
3.1 Introduction
69(2)
3.2 Extraction of Facial Regions and Ekman's Action Units
71(5)
3.2.1 Computation of Optical Flow Vector Representing Muscle Movement
72(1)
3.2.2 Computation of Region of Interest
73(1)
3.2.3 Computation of Feature Vectors Within ROI
74(1)
3.2.4 Facial Deformation and Ekman's Action Units
75(1)
3.3 Cultural Variation in Occurrence of Different AUs
76(3)
3.4 Classification Performance Considering Cultural Variability
79(5)
3.5 Conclusion
84(5)
References
84(2)
Author Biographies
86(3)
4 A Subject-Dependent Facial Expression Recognition System
89(24)
Chuan-Yu Chang
Yan-Chiang Huang
4.1 Introduction
89(2)
4.2 Proposed Method
91(12)
4.2.1 Face Detection
91(1)
4.2.2 Preprocessing
92(3)
4.2.3 Facial Feature Extraction
95(3)
4.2.4 Face Recognition
98(1)
4.2.5 Facial Expression Recognition
99(4)
4.3 Experiment Result
103(6)
4.3.1 Parameter Determination of the RBFNN
105(2)
4.3.2 Comparison of Facial Features
107(1)
4.3.3 Comparison of Face Recognition Using "Inner Face" and Full Face
108(1)
4.3.4 Comparison of Subject-Dependent and Subject-Independent Facial Expression Recognition Systems
108(1)
4.3.5 Comparison with Other Approaches
109(1)
4.4 Conclusion
109(4)
Acknowledgment
110(1)
References
110(2)
Author Biographies
112(1)
5 Facial Expression Recognition Using Independent Component Features and Hidden Markov Model
113(16)
Md. Zia Uddin
Tae-Seong Kim
5.1 Introduction
114(1)
5.2 Methodology
115(8)
5.2.1 Expression Image Preprocessing
115(1)
5.2.2 Feature Extraction
116(5)
5.2.3 Codebook and Code Generation
121(1)
5.2.4 Expression Modeling and Training Using HMM
121(2)
5.3 Experimental Results
123(2)
5.4 Conclusion
125(4)
Acknowledgments
125(1)
References
126(1)
Author Biographies
127(2)
6 Feature Selection for Facial Expression Based on Rough Set Theory
129(18)
Yong Yang
Guoyin Wang
6.1 Introduction
129(2)
6.2 Feature Selection for Emotion Recognition Based on Rough Set Theory
131(6)
6.2.1 Basic Concepts of Rough Set Theory
131(2)
6.2.2 Feature Selection Based on Rough Set and Domain-Oriented Data-Driven Data Mining Theories
133(3)
6.2.3 Attribute Reduction for Emotion Recognition
136(1)
6.3 Experiment Results and Discussion
137(6)
6.3.1 Experiment Condition
137(2)
6.3.2 Experiments for Feature Selection Method for Emotion Recognition
139(2)
6.3.3 Experiments for the Features Concerning Mouth for Emotion Recognition
141(2)
6.4 Conclusion
143(4)
Acknowledgments
143(1)
References
143(2)
Author Biographies
145(2)
7 Emotion Recognition from Facial Expressions Using Type-2 Fuzzy Sets
147(36)
Anisha Halder
Amit Konar
Aruna Chakraborty
Atulya K. Nagar
7.1 Introduction
148(2)
7.2 Preliminaries on Type-2 Fuzzy Sets
150(2)
7.2.1 Type-2 Fuzzy Sets
150(2)
7.3 Uncertainty Management in Fuzzy-Space for Emotion Recognition
152(5)
7.3.1 Principles Used in the IT2FS Approach
153(2)
7.3.2 Principles Used in the GT2FS Approach
155(1)
7.3.3 Methodology
156(1)
7.4 Fuzzy Type-2 Membership Evaluation
157(4)
7.5 Experimental Details
161(6)
7.5.1 Feature Extraction
161(3)
7.5.2 Creating the Type-2 Fuzzy Face-Space
164(1)
7.5.3 Emotion Recognition of an Unknown Facial Expression
165(2)
7.6 Performance Analysis
167(8)
7.6.1 The McNemar's Test
169(2)
7.6.2 Friedman Test
171(2)
7.6.3 The Confusion Matrix-Based RMS Error
173(2)
7.7 Conclusion
175(8)
References
176(4)
Author Biographies
180(3)
8 Emotion Recognition from Non-frontal Facial Images
183(32)
Wenming Zheng
Hao Tang
Thomas S. Huang
8.1 Introduction
184(3)
8.2 A Brief Review of Automatic Emotional Expression Recognition
187(4)
8.2.1 Framework of Automatic Facial Emotion Recognition System
187(2)
8.2.2 Extraction of Geometric Features
189(1)
8.2.3 Extraction of Appearance Features
190(1)
8.3 Databases for Non-frontal Facial Emotion Recognition
191(5)
8.3.1 BU-3DFE Database
192(2)
8.3.2 BU-4DFE Database
194(1)
8.3.3 CMU Multi-PIE Database
195(1)
8.3.4 Bosphorus 3D Database
195(1)
8.4 Recent Advances of Emotion Recognition from Non-Frontal Facial Images
196(9)
8.4.1 Emotion Recognition from 3D Facial Models
196(1)
8.4.2 Emotion Recognition from Non-frontal 2D Facial Images
197(8)
8.5 Discussions and Conclusions
205(10)
Acknowledgments
206(1)
References
206(5)
Author Biographies
211(4)
9 Maximum a Posteriori Based Fusion Method for Speech Emotion Recognition
215(22)
Ling Cen
Zhu Liang Yu
Wee Ser
9.1 Introduction
216(3)
9.2 Acoustic Feature Extraction for Emotion Recognition
219(4)
9.3 Proposed Map-Based Fusion Method
223(6)
9.3.1 Base Classifiers
224(1)
9.3.2 MAP-Based Fusion
225(1)
9.3.3 Addressing the Small Training Dataset Problem - Calculation of f_c|CL(c_r)
226(2)
9.3.4 Training and Testing Procedure
228(1)
9.4 Experiment
229(3)
9.4.1 Database
229(1)
9.4.2 Experiment Description
229(1)
9.4.3 Results and Discussion
230(2)
9.5 Conclusion
232(5)
References
232(2)
Author Biographies
234(3)
10 Emotion Recognition in Naturalistic Speech and Language - A Survey
237(32)
Felix Weninger
Martin Wöllmer
Björn Schuller
10.1 Introduction
238(1)
10.2 Tasks and Applications
239(5)
10.2.1 Use-Cases for Automatic Emotion Recognition from Speech and Language
239(2)
10.2.2 Databases
241(1)
10.2.3 Modeling and Annotation: Categories versus Dimensions
242(1)
10.2.4 Unit of Analysis
243(1)
10.3 Implementation and Evaluation
244(9)
10.3.1 Feature Extraction
245(2)
10.3.2 Feature and Instance Selection
247(1)
10.3.3 Classification and Learning
248(2)
10.3.4 Partitioning and Evaluation
250(2)
10.3.5 Research Toolkits and Open-Source Software
252(1)
10.4 Challenges
253(4)
10.4.1 Non-prototypicality, Reliability, and Class Sparsity
253(2)
10.4.2 Generalization
255(1)
10.4.3 Real-Time Processing
256(1)
10.4.4 Acoustic Environments: Noise and Reverberation
256(1)
10.5 Conclusion and Outlook
257(12)
Acknowledgment
259(1)
References
259(8)
Author Biographies
267(2)
11 EEG-Based Emotion Recognition Using Advanced Signal Processing Techniques
269(26)
Panagiotis C. Petrantonakis
Leontios J. Hadjileontiadis
11.1 Introduction
270(1)
11.2 Brain Activity and Emotions
271(1)
11.3 EEG-ER Systems: An Overview
272(1)
11.4 Emotion Elicitation
273(2)
11.4.1 Discrete Emotions
273(1)
11.4.2 Affective States
274(1)
11.4.3 Datasets
274(1)
11.5 Advanced Signal Processing in EEG-ER
275(12)
11.5.1 Discrete Emotions
275(5)
11.5.2 Affective States
280(7)
11.6 Concluding Remarks and Future Directions
287(8)
References
289(3)
Author Biographies
292(3)
12 Frequency Band Localization on Multiple Physiological Signals for Human Emotion Classification Using DWT
295(20)
M. Murugappan
12.1 Introduction
296(1)
12.2 Related Work
297(2)
12.3 Research Methodology
299(7)
12.3.1 Physiological Signals Acquisition
299(3)
12.3.2 Preprocessing and Normalization
302(1)
12.3.3 Feature Extraction
303(2)
12.3.4 Emotion Classification
305(1)
12.4 Experimental Results and Discussions
306(4)
12.5 Conclusion
310(1)
12.6 Future Work
310(5)
Acknowledgments
310(1)
References
310(2)
Author Biography
312(3)
13 Toward Affective Brain-Computer Interface: Fundamentals and Analysis of EEG-Based Emotion Classification
315(28)
Yuan-Pin Lin
Tzyy-Ping Jung
Yijun Wang
Julie Onton
13.1 Introduction
316(7)
13.1.1 Brain-Computer Interface
316(1)
13.1.2 EEG Dynamics Associated with Emotion
317(2)
13.1.3 Current Research in EEG-Based Emotion Classification
319(3)
13.1.4 Addressed Issues
322(1)
13.2 Materials and Methods
323(4)
13.2.1 EEG Dataset
323(1)
13.2.2 EEG Feature Extraction
323(2)
13.2.3 EEG Feature Selection
325(1)
13.2.4 EEG Feature Classification
325(2)
13.3 Results and Discussion
327(5)
13.3.1 Superiority of Differential Power Asymmetry
327(1)
13.3.2 Gender Independence in Differential Power Asymmetry
328(2)
13.3.3 Channel Reduction from Differential Power Asymmetry
330(1)
13.3.4 Generalization of Differential Power Asymmetry
331(1)
13.4 Conclusion
332(1)
13.5 Issues and Challenges Toward ABCIs
332(11)
13.5.1 Directions for Improving Estimation Performance
333(1)
13.5.2 Online System Implementation
334(2)
Acknowledgments
336(1)
References
336(4)
Author Biographies
340(3)
14 Bodily Expression for Automatic Affect Recognition
343(36)
Hatice Gunes
Caifeng Shan
Shizhi Chen
YingLi Tian
14.1 Introduction
344(1)
14.2 Background and Related Work
345(8)
14.2.1 Body as an Autonomous Channel for Affect Perception and Analysis
346(4)
14.2.2 Body as an Additional Channel for Affect Perception and Analysis
350(2)
14.2.3 Bodily Expression Data and Annotation
352(1)
14.3 Creating a Database of Facial and Bodily Expressions: The FABO Database
353(3)
14.4 Automatic Recognition of Affect from Bodily Expressions
356(5)
14.4.1 Body as an Autonomous Channel for Affect Analysis
356(2)
14.4.2 Body as an Additional Channel for Affect Analysis
358(3)
14.5 Automatic Recognition of Bodily Expression Temporal Dynamics
361(6)
14.5.1 Feature Extraction
362(2)
14.5.2 Feature Representation and Combination
364(1)
14.5.3 Experiments
365(2)
14.6 Discussion and Outlook
367(2)
14.7 Conclusions
369(10)
Acknowledgments
370(1)
References
370(5)
Author Biographies
375(4)
15 Building a Robust System for Multimodal Emotion Recognition
379(32)
Johannes Wagner
Florian Lingenfelser
Elisabeth André
15.1 Introduction
380(1)
15.2 Related Work
381(1)
15.3 The Callas Expressivity Corpus
382(4)
15.3.1 Segmentation of Data
383(1)
15.3.2 Emotion Modeling
383(1)
15.3.3 Annotation
384(2)
15.4 Methodology
386(4)
15.4.1 Classification Model
386(1)
15.4.2 Feature Extraction
387(1)
15.4.3 Speech Features
387(2)
15.4.4 Facial Features
389(1)
15.4.5 Feature Selection
389(1)
15.4.6 Recognizing Missing Data
390(1)
15.5 Multisensor Data Fusion
390(5)
15.5.1 Feature-Level Fusion
390(1)
15.5.2 Ensemble-Based Systems and Decision-Level Fusion
391(4)
15.6 Experiments
395(4)
15.6.1 Evaluation Method
396(1)
15.6.2 Results
396(1)
15.6.3 Discussion
397(1)
15.6.4 Contradictory Cues
397(2)
15.7 Online Recognition System
399(4)
15.7.1 Social Signal Interpretation
399(1)
15.7.2 Synchronized Data Recording and Annotation
400(1)
15.7.3 Feature Extraction and Model Training
401(1)
15.7.4 Online Classification
401(2)
15.8 Conclusion
403(8)
Acknowledgment
404(1)
References
404(6)
Author Biographies
410(1)
16 Semantic Audiovisual Data Fusion for Automatic Emotion Recognition
411(26)
Dragos Datcu
Leon J. M. Rothkrantz
16.1 Introduction
412(1)
16.2 Related Work
413(3)
16.3 Data Set Preparation
416(2)
16.4 Architecture
418(13)
16.4.1 Classification Model
418(1)
16.4.2 Emotion Estimation from Speech
419(1)
16.4.3 Video Analysis
420(8)
16.4.4 Fusion Model
428(3)
16.5 Results
431(1)
16.6 Conclusion
432(5)
References
432(2)
Author Biographies
434(3)
17 A Multilevel Fusion Approach for Audiovisual Emotion Recognition
437(24)
Girija Chetty
Michael Wagner
Roland Goecke
17.1 Introduction
437(1)
17.2 Motivation and Background
438(2)
17.3 Facial Expression Quantification
440(4)
17.4 Experiment Design
444(6)
17.4.1 Data Corpora
444(1)
17.4.2 Facial Deformation Features
445(2)
17.4.3 Marker-Based Audio Visual Features
447(1)
17.4.4 Expression Classification and Multilevel Fusion
448(2)
17.5 Experimental Results and Discussion
450(6)
17.5.1 Facial Expression Quantification
450(1)
17.5.2 Facial Expression Classification Using SVDF and VDF Features
451(1)
17.5.3 Audiovisual Fusion Experiments
451(5)
17.6 Conclusion
456(5)
References
456(3)
Author Biographies
459(2)
18 From a Discrete Perspective of Emotions to Continuous, Dynamic, and Multimodal Affect Sensing
461(32)
Isabelle Hupont
Sergio Ballano
Eva Cerezo
Sandra Baldassarri
18.1 Introduction
462(3)
18.2 A Novel Method for Discrete Emotional Classification of Facial Images
465(4)
18.2.1 Selection and Extraction of Facial Inputs
465(2)
18.2.2 Classifiers Selection and Combination
467(1)
18.2.3 Results
468(1)
18.3 A 2D Emotional Space for Continuous and Dynamic Facial Affect Sensing
469(5)
18.3.1 Facial Expressions Mapping to the Whissell Affective Space
469(4)
18.3.2 From Still Images to Video Sequences through 2D Emotional Kinematics Modeling
473(1)
18.4 Expansion to Multimodal Affect Sensing
474(5)
18.4.1 Step 1: 2D Emotional Mapping to the Whissell Space
477(1)
18.4.2 Step 2: Temporal Fusion of Individual Modalities to Obtain a Continuous 2D Emotional Path
477(1)
18.4.3 Step 3: "Emotional Kinematics" Path Filtering
478(1)
18.5 Building Tools That Care
479(7)
18.5.1 T-EDUCO: A T-learning Tutoring Tool
479(3)
18.5.2 Multimodal Fusion Application to Instant Messaging
482(4)
18.6 Concluding Remarks and Future Work
486(7)
Acknowledgments
488(1)
References
488(3)
Author Biographies
491(2)
19 Audiovisual Emotion Recognition Using Semi-Coupled Hidden Markov Model with State-Based Alignment Strategy
493(22)
Chung-Hsien Wu
Jen-Chun Lin
Wen-Li Wei
19.1 Introduction
494(1)
19.2 Feature Extraction
495(5)
19.2.1 Facial Feature Extraction
496(2)
19.2.2 Prosodic Feature Extraction
498(2)
19.3 Semi-Coupled Hidden Markov Model
500(4)
19.3.1 Model Formulation
500(2)
19.3.2 State-Based Bimodal Alignment Strategy
502(2)
19.4 Experiments
504(4)
19.4.1 Data Collection
504(2)
19.4.2 Experimental Results
506(2)
19.5 Conclusion
508(7)
References
509(3)
Author Biographies
512(3)
20 Emotion Recognition in Car Industry
515(30)
Christos D. Katsis
George Rigas
Yorgos Goletsis
Dimitrios I. Fotiadis
20.1 Introduction
516(1)
20.2 An Overview of Application for the Car Industry
517(1)
20.3 Modality-Based Categorization
517(3)
20.3.1 Video-Image-Based Emotion Recognition
518(1)
20.3.2 Speech Based Emotion Recognition
518(1)
20.3.3 Biosignal-Based Emotion Recognition
519(1)
20.3.4 Multimodal Based Emotion Recognition
519(1)
20.4 Emotion-Based Categorization
520(3)
20.4.1 Stress
520(1)
20.4.2 Fatigue
521(1)
20.4.3 Confusion and Nervousness
522(1)
20.4.4 Distraction
522(1)
20.5 Two Exemplar Cases
523(13)
20.5.1 AUBADE
523(7)
20.5.2 I-Way
530(5)
20.5.3 Results
535(1)
20.6 Open Issues and Future Steps
536(1)
20.7 Conclusion
537(8)
References
537(6)
Author Biographies
543(2)
Index 545
Amit Konar is a Professor of Electronics and Telecommunication Engineering at Jadavpur University, India, where he offers graduate-level courses on Artificial Intelligence and directs research in Cognitive Science, Robotics, and Human-Computer Interfaces. Dr. Konar is the recipient of many prestigious grants and awards and is the author of 10 books and over 350 research publications. He has offered consultancy services to government and private industries. He has served in editorial roles for many journals, including IEEE Transactions on Systems, Man and Cybernetics (Part A) and IEEE Transactions on Fuzzy Systems.

Aruna Chakraborty is an Associate Professor in the Department of Computer Science and Engineering, St. Thomas' College of Engineering and Technology, India. She is also a Visiting Faculty member at Jadavpur University, where she offers graduate-level courses on Intelligent Automation and Robotics, and on Cognitive Science. Her research interests include human-computer interfaces, emotional intelligence, and reasoning with fuzzy logic.