
Hardware Accelerator Systems for Artificial Intelligence and Machine Learning, Volume 122 [Hardcover]

Edited by Ganesh Chandra Deka (Ministry of Skill Development and Entrepreneurship, New Delhi, India) and Shiho Kim (School of Integrated Technology, Yonsei University, Seoul, Korea)
  • Format: Hardback, 416 pages, height x width: 229x152 mm, weight: 810 g
  • Series: Advances in Computers
  • Publication date: 07-Apr-2021
  • Publisher: Academic Press Inc
  • ISBN-10: 0128231238
  • ISBN-13: 9780128231234
Hardware Accelerator Systems for Artificial Intelligence and Machine Learning, Volume 122 delves into artificial intelligence and the growth it has seen with the advent of deep neural networks (DNNs) and machine learning. Updates in this release include chapters on hardware accelerator systems for artificial intelligence and machine learning, an introduction to hardware accelerator systems for artificial intelligence and machine learning, deep learning with GPUs, edge computing optimization of deep learning models for specialized tensor processing architectures, the architecture of NPUs for DNNs, hardware architecture for convolutional neural networks for image processing, FPGA-based neural network accelerators, and much more.
  • Provides updates on the architecture of GPUs, NPUs and DNNs
  • Discusses in-memory computing, machine intelligence and quantum computing
  • Includes sections on hardware accelerator systems to improve processing efficiency and performance
Contributors ix
Preface xi
1 Introduction to hardware accelerator systems for artificial intelligence and machine learning (Neha Gupta), pp. 1–22
  1 Introduction to artificial intelligence and machine learning in hardware acceleration, pp. 2–3
  2 Deep learning and neural network acceleration, pp. 4–7
  3 HW accelerators for artificial neural networks and machine learning, pp. 8–12
  4 SW framework for deep neural networks, pp. 13–15
  5 Comparison of FPGA, CPU and GPU, pp. 16–18
  6 Conclusion and future scope, p. 19
  References, pp. 19–20
  About the author, pp. 21–22
2 Hardware accelerator systems for embedded systems (William J. Song), pp. 23–50
  1 Introduction, pp. 24–25
  2 Neural network computing in embedded systems, pp. 26–33
  3 Hardware acceleration in embedded systems, pp. 34–41
  4 Software frameworks for neural networks, pp. 42–43
  Acknowledgments, p. 44
  References, pp. 44–48
  About the author, pp. 49–50
3 Hardware accelerator systems for artificial intelligence and machine learning (Hyunbin Park, Shiho Kim), pp. 51–96
  1 Introduction, p. 52
  2 Background, pp. 53–65
  3 Hardware inference accelerators for deep neural networks, pp. 66–77
  4 Hardware inference accelerators using digital neurons, pp. 78–87
  5 Summary, p. 88
  Acknowledgments, p. 89
  References, pp. 90–93
  About the authors, pp. 94–96
4 Generic quantum hardware accelerators for conventional systems (Parth Bir), pp. 97–134
  1 Introduction, p. 98
  2 Principles of computation, pp. 98–99
  3 Need and foundation for quantum hardware accelerator design, pp. 100–117
  4 A generic quantum hardware accelerator (GQHA), pp. 118–124
  5 Industrially available quantum hardware accelerators, pp. 125–129
  6 Conclusion and future work, p. 130
  References, pp. 130–132
  About the author, pp. 133–134
5 FPGA based neural network accelerators (Joo-Young Kim), pp. 135–166
  1 Introduction, p. 136
  2 Background, pp. 137–141
  3 Algorithmic optimization, pp. 142–146
  4 Accelerator architecture, pp. 147–153
  5 Design methodology, pp. 154–156
  6 Applications, p. 157
  7 Evaluation, pp. 158–159
  8 Future research directions, p. 160
  References, pp. 160–163
  About the author, pp. 164–166
6 Deep learning with GPUs (Won Jeon, Gun Ko, Jiwon Lee, Hyunwuk Lee, Dongho Ha, Won Woo Ro), pp. 167–216
  1 Deep learning applications using GPU as accelerator, pp. 168–170
  2 Overview of graphics processing unit, pp. 171–180
  3 Deep learning acceleration in GPU hardware perspective, pp. 181–187
  4 GPU software for accelerating deep learning, pp. 188–195
  5 Advanced techniques for optimizing deep learning models on GPUs, pp. 196–206
  6 Cons and pros of GPU accelerators, p. 207
  Acknowledgment, p. 208
  References, pp. 209–212
  Further reading, p. 213
  About the authors, pp. 213–216
7 Architecture of neural processing unit for deep neural networks (Kyuho J. Lee), pp. 217–246
  1 Introduction, p. 218
  2 Background, pp. 219–221
  3 Considerations in hardware design, p. 222
  4 NPU architectures, pp. 223–234
  5 Discussion, pp. 235–237
  6 Summary, p. 238
  Acknowledgments, p. 239
  References, pp. 239–242
  Further reading, pp. 243–244
  About the author, pp. 245–246
8 Energy-efficient deep learning inference on edge devices (Francesco Daghero, Daniele Jahier Pagliari, Massimo Poncino), pp. 247–302
  1 Introduction, p. 248
  2 Theoretical background, pp. 249–257
  3 Deep learning frameworks and libraries, p. 258
  4 Advantages of deep learning on the edge, p. 259
  5 Applications of deep learning at the edge, pp. 260–261
  6 Hardware support for deep learning inference at the edge, pp. 262–264
  7 Static optimizations for deep learning inference at the edge, pp. 265–281
  8 Dynamic (input-dependent) optimizations for deep learning inference at the edge, pp. 282–292
  9 Open challenges and future directions, p. 293
  References, pp. 293–300
  About the authors, pp. 301–302
9 "Last mile" optimization of edge computing ecosystem with deep learning models and specialized tensor processing architectures (Yuri Gordienko, Yuriy Kochura, Vlad Taran, Nikita Gordienko, Oleksandr Rokovyi, Oleg Alienin, Sergii Stirenko), pp. 303–342
  1 Introduction, pp. 304–305
  2 State of the art, pp. 306–310
  3 Methodology, pp. 311–318
  4 Results, pp. 319–329
  5 Discussion, pp. 330–331
  6 Conclusions, p. 332
  Acknowledgments, p. 333
  References, pp. 333–338
  Further reading, p. 339
  About the authors, pp. 339–342
10 Hardware accelerator for training with integer backpropagation and probabilistic weight update (Hyunbin Park, Shiho Kim), pp. 343–366
  1 Introduction, pp. 344–346
  2 Integer back propagation with probabilistic weight update, pp. 347–353
  3 Consideration of hardware implementation of the probabilistic weight update, pp. 354–355
  4 Simulation results of the proposed scheme, pp. 356–358
  5 Discussions, pp. 359–360
  6 Summary, p. 361
  Acknowledgments, p. 361
  References, pp. 362–363
  About the authors, pp. 364–366
11 Music recommender system using restricted Boltzmann machine with implicit feedback (Amitabh Biswal, Malaya Dutta Borah, Zakir Hussain), pp. 367–400
  1 Introduction, pp. 368–370
  2 Types of recommender systems, pp. 371–385
  3 Problem statement, p. 386
  4 Explanation of RBM, pp. 386–389
  5 Proposed architecture, pp. 390–394
  6 Minibatch size used for training and selection of weights and biases, p. 395
  7 Types of activation function that can be used in this model, p. 395
  8 Evaluation metrics that can be used to measure for music recommendation, p. 396
  9 Experimental setup, p. 397
  10 Result, p. 398
  11 Conclusion, p. 399
  12 Future works, p. 399
  References, pp. 399–400
  About the authors, p. 401
Shiho Kim is a professor in the School of Integrated Technology at Yonsei University, Seoul, Korea. His previous assignments include system-on-chip design engineer at LG Semicon Ltd. (currently SK Hynix), Seoul, Korea (1995–1996); Director of RAVERS (Research center for Advanced hybrid Electric Vehicle Energy Recovery System), a government-supported IT research center; Associate Director of the ICT Consilience Program, a Korean national program for cultivating talented engineers in the field of information and communication technology (2011–2012); and Director of the Seamless Transportation Lab at Yonsei University, Korea (since 2011). His main research interests include software and hardware technologies for intelligent vehicles, blockchain technology for intelligent transportation systems, and reinforcement learning for autonomous vehicles. He is a member of the editorial board and a reviewer for various journals and international conferences, and has so far organized two international conferences as Technical Chair/General Chair. He is a member of IEIE (Institute of Electronics and Information Engineers of Korea) and KSAE (Korean Society of Automotive Engineers), vice president of KINGC (Korean Institute of Next Generation Computing), and a senior member of IEEE. He has co-authored over 100 papers and holds more than 50 patents in the area of information and communication technology.

Ganesh Chandra Deka is currently Deputy Director (Training) at the Directorate General of Training, Ministry of Skill Development and Entrepreneurship, Government of India, New Delhi, India. His research interests include e-Governance, Big Data analytics, NoSQL databases, and vocational education and training.

He has published 2 books on Cloud Computing with LAP Lambert, Germany, and is the co-author of 4 textbooks on fundamentals of computer science (3 published by Moni Manik Prakashan, Guwahati, Assam, India, and 1 by IGI Global, USA). To date he has edited 14 books (6 with IGI Global, USA; 5 with CRC Press, USA; 2 with Elsevier; and 1 with Springer) on Big Data, NoSQL, and Cloud Computing, and has authored 10 book chapters.

He has published around 47 research papers in various IEEE conferences and has organized 8 IEEE international conferences as Technical Chair in India. He is a member of the editorial board and a reviewer for various journals and international conferences, a member of IEEE and the Institution of Electronics and Telecommunication Engineers, India, and an associate member of the Institution of Engineers, India.