
E-book: Human-Computer Interface Technologies for the Motor Impaired

Dinesh K. Kumar (RMIT University, Melbourne, Australia), Sridhar Poosapadi Arjunan (RMIT University, Melbourne, Australia)
  • Format: EPUB+DRM
  • Price: 64,99 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Human Computer Interface Technologies for the Motor Impaired examines both the technical and social aspects of human-computer interface (HCI). Written by world-class academic experts committed to improving HCI technologies for people with disabilities, this all-inclusive book explores the latest research and offers insight into the current limitations of this field. It introduces the concept of HCI, identifies and describes the fundamentals associated with a specific technology of HCI, and provides examples for each. It also lists and highlights the different modalities (video, speech, mechanical, myoelectric, electro-oculogram, and brain waves) that are available, and discusses their relevant applications.

Easily and readily understood by researchers, engineers, clinicians, and the common layperson, the book describes a number of HCI technologies ranging from simple modification of the computer mouse and joystick to a brain-computer interface (BCI) that uses the electrical recording of the brain activity of the user. The text includes photographs or illustrations for each device, as well as references at the end of each chapter for further study.

In addition, this book:

  • Describes the mechanical sensors that are used as an interface to control a computer or screen for the aged and disabled
  • Discusses the BCI using brain waves recorded by noninvasive electrodes to recognize the command from the user
  • Presents the myoelectric interface for controlling devices such as the prosthetic/robotic hand
  • Explains the technology of tracking the eye gaze using video
  • Provides the fundamentals of voice recognition technologies for computer and machine control applications
  • Examines a secure and voiceless method for the recognition of speech-based commands using video of lip movement
Human Computer Interface Technologies for the Motor Impaired considers possible applications, discusses limitations, and presents the current research taking place in the field of HCI. Dedicated to enhancing the lives of people living with disabilities, this book aids professionals in biomedical, electronics, and computer engineering, and serves as a resource for anyone interested in the developing applications of HCI.
Table of Contents

List of Figures xv
List of Tables xix
Preface xxi
Acknowledgment xxv
Authors xxvii
1 Introduction 1(8)
Abstract 1(1)
1.1 Introduction: Human--computer interface for people with disabilities 1(1)
1.2 Background 2(1)
1.3 History 3(2)
1.4 Future of HCI 5(1)
1.5 Layout of the book 6(3)
Reference 8(1)
2 Human--computer interface: Mechanical sensors 9(14)
Abstract 9(1)
2.1 Introduction 9(1)
2.2 Modified devices 10(4)
2.2.1 Mouse or joystick 10(1)
2.2.2 Tracking ball 10(1)
2.2.3 Modified computer display or digital tablet 11(1)
2.2.4 Special purpose interface devices 12(1)
2.2.4.1 Head movement 12(1)
2.2.4.2 Blow and suck 13(1)
2.2.4.3 Smart glove 13(1)
2.3 Sensors 14(5)
2.3.1 Inertial measurement unit 14(1)
2.3.2 Angle sensor 15(1)
2.3.3 Stretch sensor 16(1)
2.3.4 Force sensor 17(1)
2.3.4.1 Example of smart glove 18(1)
2.3.4.2 Computer connectivity specifics 18(1)
2.4 Applications of HCI based on mechanical sensors 19(1)
2.4.1 Shortcomings of HCI based on mechanical sensors 20(1)
2.5 Current research and future improvements 20(3)
References 20(3)
3 Brain--computer interface based on thought waves 23(20)
Abstract 23(1)
3.1 Introduction 24(1)
3.2 History of brain--computer interface 25(3)
3.3 Significance of BCI devices 28(1)
3.4 BCI technology 29(2)
3.4.1 Invasive BCI 29(1)
3.4.2 Motor-invasive BCI 29(1)
3.4.3 Noninvasive BCI 30(1)
3.5 System design 31(2)
3.5.1 Invasive thought translation device BCI 32(1)
3.5.2 Invasive sensory BCI 32(1)
3.5.3 Noninvasive thought translation device BCI 32(1)
3.6 Signal analysis 33(1)
3.6.1 Improving signal quality 33(1)
3.6.2 EEG feature selection 34(1)
3.7 BCI translation algorithms 34(1)
3.8 User consideration 35(1)
3.9 Applications of BCI 35(2)
3.9.1 Applications of invasive TTD 36(1)
3.9.2 Applications of noninvasive TTD 36(1)
3.10 Limitations 37(1)
3.11 Future research 38(1)
3.12 Ethical consideration 39(4)
References 40(3)
4 Evoked potentials-based brain--computer interface 43(14)
Abstract 43(1)
4.1 Introduction 43(2)
4.2 Brain--computer interface (BCI) systems based on steady-state visual evoked potential 45(5)
4.2.1 EEG headset 46(1)
4.2.2 Display/LED panel 46(1)
4.2.3 Main controller/interface board 47(1)
4.2.4 Processing unit 48(2)
4.3 Design challenges and limitations 50(1)
4.4 Results 51(2)
4.5 User benefits and improvements 53(4)
References 54(3)
5 Myoelectric-based hand gesture recognition for human--computer interface applications 57(20)
Abstract 57(1)
5.1 Introduction 57(1)
5.2 Background 58(2)
5.2.1 Identification of individual finger actions 58(2)
5.2.2 Identification of hand grips 60(1)
5.3 Current technologies and implementation 60(17)
5.3.1 Individual finger movements 60(2)
5.3.1.1 Techniques to analyze the data 62(2)
5.3.1.2 Results 64(1)
5.3.1.3 Discussion 64(3)
5.3.2 Hand/finger grip movements 67(2)
5.3.2.1 Techniques to identify grip patterns 69(5)
References 74(3)
6 Video-based hand movement for human--computer interface 77(18)
Abstract 77(1)
6.1 Introduction 77(4)
6.1.1 Hand-action recognition using marker-based video 78(1)
6.1.2 Hand-action recognition using marker-less video approach 79(2)
6.2 Background 81(6)
6.2.1 Motion image estimation 81(1)
6.2.2 Wavelet transforms 82(1)
6.2.2.1 Wavelet 82(1)
6.2.2.2 Discrete wavelet transform (DWT) 83(1)
6.2.2.3 Discrete stationary wavelet transform 84(1)
6.2.3 Features of temporal history template (THT) 84(2)
6.2.3.1 Feature classification 86(1)
6.2.3.2 Feature distance 86(1)
6.3 Data analysis 87(1)
6.4 Discussion 87(2)
6.5 User requirements 89(1)
6.6 User benefits 89(1)
6.7 Shortcomings 90(1)
6.8 Future developments 90(5)
References 91(4)
7 Human--computer interface based on electrooculography 95(22)
Abstract 95(1)
7.1 Introduction 95(1)
7.2 Background 96(5)
7.2.1 Eye movement 96(2)
7.2.2 Electrooculography (EOG) 98(1)
7.2.3 Human eye anatomy: Movement 99(2)
7.3 Current technologies: Historical to state of the art 101(3)
7.3.1 System requirements 103(1)
7.4 Example of EOG-based system 104(5)
7.4.1 Introduction 104(1)
7.4.2 System description 104(2)
7.4.3 Experimental protocol 106(1)
7.4.4 Experimental procedure 107(2)
7.5 Results 109(1)
7.6 Discussion 110(2)
7.7 Limitations of the study 112(1)
7.8 Discussion: User benefits and limitations 113(4)
References 115(1)
Further reading 116(1)
8 Video-based eye tracking 117(18)
Abstract 117(1)
8.1 Introduction 117(2)
8.2 Background and history 119(3)
8.3 An example eye-tracking method 122(4)
8.3.1 Hough transform 122(1)
8.3.2 Otsu's method 123(1)
8.3.3 Design algorithm 124(1)
8.3.4 Problems with eye-tracking technique 125(1)
8.4 Data analysis 126(2)
8.4.1 Distance detection 126(2)
8.5 Results 128(1)
8.6 Discussion: User benefits and limitations 129(6)
References 132(3)
9 Speech for controlling computers 135(26)
Abstract 135(1)
9.1 Introduction 135(1)
9.2 History of speech-based machine commands 136(1)
9.3 Automatic speech recognition (ASR) 137(1)
9.3.1 Digitization of sound signal 137(1)
9.4 Speech denoising methods 138(4)
9.4.1 Spectral-based filtering 139(1)
9.4.2 Noise profile 139(1)
9.4.3 Low-pass filter 139(1)
9.4.4 Anti-aliasing filter 140(1)
9.4.5 High-pass filter 140(1)
9.4.6 Band-pass filter 140(1)
9.4.7 Notch filter 141(1)
9.4.8 Mean and median filter 141(1)
9.4.9 Adaptive filter 141(1)
9.4.10 Kalman filter 142(1)
9.5 Speech analysis fundamentals 142(1)
9.6 Subsections of speech: Phonemes 143(1)
9.6.1 Vowels 144(1)
9.6.2 Consonants 144(1)
9.7 How people speak: Speech production model 144(2)
9.8 Place principle hearing model 146(1)
9.9 Features selection for speech analysis 147(3)
9.9.1 Power spectral analysis 148(1)
9.9.2 Linear predictive coding (LPC) 148(1)
9.9.3 Cepstral analysis 148(1)
9.9.4 Basic principle 149(1)
9.10 Speech feature classification 150(1)
9.11 Artificial neural networks 151(3)
9.11.1 Support vector machine 152(1)
9.11.2 Hidden Markov model 152(1)
9.11.2.1 User benefits 153(1)
9.12 Limitations in current systems 154(7)
9.12.1 Recent developments 155(1)
9.12.1.1 Speaker source separation 155(3)
9.12.1.2 Audio and visual fusion 158(1)
References 159(2)
10 Lip movement for human--computer interface 161(16)
Abstract 161(1)
10.1 Introduction: History and applications 161(2)
10.2 Current technologies 163(6)
10.2.1 Video-based speech analyzer 163(1)
10.2.1.1 Video processing to segment facial movement 164(1)
10.2.1.2 Issues related to the facial movement segmentation 165(1)
10.2.1.3 Extraction of visual speech features 166(2)
10.2.2 Speech recognition based on facial muscle activity 168(1)
10.2.2.1 Face movement related to speech 168(1)
10.2.2.2 Features of SEMG 169(1)
10.3 User requirements 169(1)
10.3.1 Video-based voiceless speech recognition 169(1)
10.3.2 EMG-based voiceless speech recognition 170(1)
10.4 Example of voiceless speech recognition systems 170(5)
10.4.1 Video data acquisition and processing 170(1)
10.4.2 Visual speech recognizer 171(2)
10.4.3 Experiments using facial muscle activity signals 173(1)
10.4.3.1 Facial EMG recording and preprocessing 174(1)
10.4.3.2 Data analysis 174(1)
10.4.3.3 Classification of visual and facial EMG features 174(1)
10.5 Discussion: User benefits 175(1)
10.6 Summary 176(1)
References 177(4)
Index 181
Dinesh K. Kumar received a B.Tech from IIT Madras, and a Ph.D in biomedical engineering from IIT Delhi and AIIMS, Delhi. He is a professor and leader of biomedical engineering at RMIT University, Melbourne, Australia. Dr. Kumar has published more than 330 refereed papers in the field, and his interests include muscle control, affordable diagnostics, and human-computer interface. He is editor of multiple journals, chairs a range of conferences related to biomedical engineering, and enjoys walking in nature in his spare time.

Sridhar Poosapadi Arjunan received a B.Eng in electronics and communication from the University of Madras, India; an M.Eng in communication systems from Madurai Kamaraj University, India; and a Ph.D in biomedical signal processing from RMIT University, Australia. He is currently a postdoctoral research fellow with the Biosignals Lab at RMIT University. Dr. Poosapadi Arjunan is a recipient of the RMIT SECE Research Scholarship, the CASS Australian Early Career Researcher Grant, and the Australia-India ECR Fellowship. His major research interests include biomedical signal processing, rehabilitation study, fractal theory, and human-computer interface applications.