Preface                                                                  xxi
Acknowledgment                                                           xxv
Authors                                                                xxvii

1.1 Introduction: Human--computer interface for people with disabilities   1

2 Human--computer interface: Mechanical sensors                            9
  2.2.3 Modified computer display or digital tablet                       11
  2.2.4 Special purpose interface devices                                 12
  2.3.1 Inertial measurement unit                                         14
  2.3.4.1 Example of smart glove                                          18
  2.3.4.2 Computer connectivity specifics                                 18
  2.4 Applications of HCI based on mechanical sensors                     19
  2.4.1 Shortcomings of HCI based on mechanical sensors                   20
  2.5 Current research and future improvements                            20

3 Brain--computer interface based on thought waves                        23
  3.2 History of brain--computer interface                                25
  3.3 Significance of BCI devices                                         28
  3.5.1 Invasive thought translation device BCI                           32
  3.5.2 Invasive sensory BCI                                              32
  3.5.3 Noninvasive thought translation device BCI                        32
  3.6.1 Improving signal quality                                          33
  3.6.2 EEG feature selection                                             34
  3.7 BCI translation algorithms                                          34
  3.9.1 Applications of invasive TTD                                      36
  3.9.2 Applications of noninvasive TTD                                   36
  3.12 Ethical considerations                                             39

4 Evoked potentials-based brain--computer interface                       43
  4.2 Brain--computer interface (BCI) systems based on steady-state
      visual evoked potential                                             45
  4.2.3 Main controller/interface board                                   47
  4.3 Design challenges and limitations                                   50
  4.5 User benefits and improvements                                      53

5 Myoelectric-based hand gesture recognition for human--computer
  interface applications                                                  57
  5.2.1 Identification of individual finger actions                       58
  5.2.2 Identification of hand grips                                      60
  5.3 Current technologies and implementation                             60
  5.3.1 Individual finger movements                                       60
  5.3.1.1 Techniques to analyze the data                                  62
  5.3.2 Hand/finger grip movements                                        67
  5.3.2.1 Techniques to identify grip patterns                            69

6 Video-based hand movement for human--computer interface                 77
  6.1.1 Hand-action recognition using marker-based video                  78
  6.1.2 Hand-action recognition using marker-less video approach          79
  6.2.1 Motion image estimation                                           81
  6.2.2.2 Discrete wavelet transform (DWT)                                83
  6.2.2.3 Discrete stationary wavelet transform                           84
  6.2.3 Features of temporal history template (THT)                       84
  6.2.3.1 Feature classification                                          86

7 Human--computer interface based on electrooculography                   95
  7.2.2 Electrooculography (EOG)                                          98
  7.2.3 Human eye anatomy: Movement                                       99
  7.3 Current technologies: Historical to state of the art               101
  7.3.1 System requirements                                              103
  7.4 Example of EOG-based system                                        104
  7.4.3 Experimental protocol                                            106
  7.4.4 Experimental procedure                                           107
  7.7 Limitations of the study                                           112
  7.8 Discussion: User benefits and limitations                          113

8 Video-based eye tracking                                               117
  8.2 Background and history                                             119
  8.3 An example eye-tracking method                                     122
  8.3.4 Problems with eye-tracking technique                             125
  8.6 Discussion: User benefits and limitations                          129

9 Speech for controlling computers                                       135
  9.2 History of speech-based machine commands                           136
  9.3 Automatic speech recognition (ASR)                                 137
  9.3.1 Digitization of sound signal                                     137
  9.4 Speech denoising methods                                           138
  9.4.1 Spectral-based filtering                                         139
  9.4.4 Anti-aliasing filter                                             140
  9.4.8 Mean and median filter                                           141
  9.5 Speech analysis fundamentals                                       142
  9.6 Subsections of speech: Phonemes                                    143
  9.7 How people speak: Speech production model                          144
  9.8 Place principle hearing model                                      146
  9.9 Feature selection for speech analysis                              147
  9.9.1 Power spectral analysis                                          148
  9.9.2 Linear predictive coding (LPC)                                   148
  9.10 Speech feature classification                                     150
  9.11 Artificial neural networks                                        151
  9.11.1 Support vector machine                                          152
  9.11.2 Hidden Markov model                                             152
  9.12 Limitations in current systems                                    154
  9.12.1 Recent developments                                             155
  9.12.1.1 Speaker source separation                                     155
  9.12.1.2 Audio and visual fusion                                       158

10 Lip movement for human--computer interface                            161
  10.1 Introduction: History and applications                            161
  10.2 Current technologies                                              163
  10.2.1 Video-based speech analyzer                                     163
  10.2.1.1 Video processing to segment facial movement                   164
  10.2.1.2 Issues related to the facial movement segmentation            165
  10.2.1.3 Extraction of visual speech features                          166
  10.2.2 Speech recognition based on facial muscle activity              168
  10.2.2.1 Face movement related to speech                               168
  10.2.2.2 Features of SEMG                                              169
  10.3.1 Video-based voiceless speech recognition                        169
  10.3.2 EMG-based voiceless speech recognition                          170
  10.4 Example of voiceless speech recognition systems                   170
  10.4.1 Video data acquisition and processing                           170
  10.4.2 Visual speech recognizer                                        171
  10.4.3 Experiments using facial muscle activity signals                173
  10.4.3.1 Facial EMG recording and preprocessing                        174
  10.4.3.3 Classification of visual and facial EMG features              174
  10.5 Discussion: User benefits                                         175

References                                                               177
Index                                                                    181