
E-book: Real-Time Multi-Chip Neural Network for Cognitive Systems

Edited by Amir Zjajo (Delft University of Technology, The Netherlands) and Rene van Leuken (Delft University of Technology, The Netherlands)
  • Format - PDF+DRM
  • Price: 114,40 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorised with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you must install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Simulation of brain neurons in real-time using biophysically meaningful models is a prerequisite for a comprehensive understanding of how neurons process information and communicate with each other, effectively complementing in-vivo experiments. In spiking neural networks (SNNs), propagated information is encoded not only by the firing rate of each neuron in the network, as in artificial neural networks (ANNs), but also by spike amplitude, spike-train patterns, and the transfer rate. The high level of realism of SNNs and their greater computational and analytic capabilities in comparison with ANNs, however, limit the size of the networks that can be realized. Consequently, the main challenge in building complex and biophysically accurate SNNs is posed largely by the high computational and data-transfer demands.
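To make the encoding difference concrete, the sketch below (not taken from the book; all parameter values and names are illustrative assumptions) steps a single leaky integrate-and-fire neuron in C++ and records the times of the discrete spikes it emits, the kind of spike-train information an SNN propagates in addition to an average firing rate:

    // Minimal leaky integrate-and-fire sketch (illustrative only).
    #include <cstdio>
    #include <vector>

    int main() {
        const double dt      = 0.1;    // time step [ms]
        const double tau     = 10.0;   // membrane time constant [ms]
        const double v_rest  = -65.0;  // resting potential [mV]
        const double v_thr   = -50.0;  // spike threshold [mV]
        const double v_reset = -70.0;  // reset potential [mV]
        const double r_m     = 10.0;   // membrane resistance [MOhm]

        double v = v_rest;
        std::vector<double> spike_times;

        for (int step = 0; step < 2000; ++step) {
            double t = step * dt;
            double i_inj = (t > 20.0 && t < 180.0) ? 2.0 : 0.0;  // injected current [nA]
            v += dt / tau * (v_rest - v + r_m * i_inj);          // forward-Euler update
            if (v >= v_thr) {                                    // threshold crossing -> spike
                spike_times.push_back(t);
                v = v_reset;
            }
        }
        std::printf("emitted %zu spikes; first at %.1f ms\n",
                    spike_times.size(),
                    spike_times.empty() ? 0.0 : spike_times.front());
        return 0;
    }

The biophysically accurate models treated in the book (e.g. the extended Hodgkin-Huxley model) track far more state per cell; the point here is only that the output is a timed sequence of spikes rather than a single rate value.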

Real-Time Multi-Chip Neural Network for Cognitive Systems presents a novel real-time, reconfigurable, multi-chip SNN system architecture based on localized communication, which effectively reduces the communication cost to linear growth. The system uses double-precision floating-point arithmetic for the most biologically accurate simulation of cell behavior, and is flexible enough to allow easy implementation of various neural network topologies, cell communication schemes, cell models, and cell types. The system offers high run-time configurability, which reduces the need to resynthesize it. In addition, the simulator features configurable on- and off-chip communication latencies as well as neuron calculation latencies. All parts of the system are generated automatically based on the neuron interconnection scheme in use. The simulator allows exploration of different system configurations, e.g. the interconnection scheme between the neurons and the intracellular concentrations of different chemical compounds (ions), which affect how action potentials are initiated and propagated.
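As a rough illustration of the localized-communication idea described above (a minimal sketch, not the book's implementation; the names Cluster, Spike, and route_spike are assumptions made for this example), neurons can be grouped into clusters so that spike events with local targets stay on-chip and only cross-cluster events are forwarded over the shared link, which is what keeps the communication cost growing roughly linearly with network size:

    // Cluster-local spike delivery sketch (illustrative only).
    #include <cstdio>
    #include <vector>

    struct Spike { int src_neuron; double time_ms; };

    struct Cluster {
        int id;
        std::vector<int> neurons;        // global neuron ids owned by this cluster
        std::vector<Spike> local_queue;  // events delivered without leaving the cluster
    };

    // Local targets are served directly; only remote targets use the inter-cluster link.
    void route_spike(const Spike& s, Cluster& src,
                     std::vector<Spike>& inter_cluster_link, int target_cluster) {
        if (target_cluster == src.id) {
            src.local_queue.push_back(s);     // cheap, on-chip delivery
        } else {
            inter_cluster_link.push_back(s);  // expensive, off-cluster delivery
        }
    }

    int main() {
        Cluster c0{0, {0, 1, 2, 3}, {}};
        std::vector<Spike> link;
        route_spike({0, 1.5}, c0, link, /*target_cluster=*/0);  // stays local
        route_spike({1, 2.0}, c0, link, /*target_cluster=*/1);  // crosses the link
        std::printf("local events: %zu, inter-cluster events: %zu\n",
                    c0.local_queue.size(), link.size());
        return 0;
    }
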
Preface xv
List of Contributors xvii
List of Figures xix
List of Tables xxix
List of Abbreviations xxxi
1 Introduction 1(22)
Amir Zjajo
Rene van Leuken
1.1 A Real-Time Reconfigurable Multi-Chip Architecture for Large-Scale Biophysically Accurate Neuron Simulation 1(3)
1.2 The Inferior Olivary Nucleus Cell 4(6)
1.2.1 Abstract Model Description 4(2)
1.2.2 The ION Cell Design Configuration 6(3)
1.2.3 The ION Cell Cluster Controller 9(1)
1.3 Multi-Chip Dataflow Architecture 10(7)
1.4 Organization of the Book 17(2)
References 19(4)
2 Multi-Chip Dataflow Architecture for Massive Scale Biophysically Accurate Neuron Simulation 23(26)
Jaco Hofmann
2.1 Introduction 24(1)
2.2 System Design Configuration 25(11)
2.2.1 Requirements 25(1)
2.2.2 Zero Communication Time: The Optimal Approach 26(1)
2.2.3 Localising Communication: How to Speed Up the Common Case 26(1)
2.2.4 Network-on-Chips 27(1)
2.2.5 Localise Communication between Clusters 28(3)
2.2.6 Synchronisation between the Clusters 31(1)
2.2.7 Adjustments to the Network to Scale over Multiple FPGAs 32(1)
2.2.8 Interfacing the Outside World: Inputs and Outputs 33(1)
2.2.9 Adding Flexibility: Run-Time Configuration 34(1)
2.2.10 Parameters of the System 35(1)
2.2.11 Connectivity and Structure Generation 35(1)
2.3 System Implementation 36(5)
2.3.1 Exploiting Locality: Clusters 36(2)
2.3.2 Connecting Clusters: Routers 38(1)
2.3.3 Tracking Time: Iteration Controller 39(1)
2.3.4 Inputs and Outputs 39(1)
2.3.5 The Control Bus for Run-Time Configuration 40(1)
2.3.6 Automatic Structure Generation and Connectivity Generation 41(1)
2.4 Experimental Results 41(5)
2.5 Conclusions 46(1)
References 46(3)
3 A Real-Time Hybrid Neuron Network for Highly Parallel Cognitive Systems 49(32)
Jan Christiaanse
3.1 Introduction 49(2)
3.2 The Calculation Architecture 51(12)
3.2.1 The Physical Cell Overview 52(1)
3.2.2 Initialising the Physical Cells 53(1)
3.2.3 Axon Hillock + Soma Hardware 53(4)
3.2.3.1 Exponent operand schedule 54(1)
3.2.3.2 Axon hillock and soma compartment controller 55(2)
3.2.4 Dendrite Hardware 57(4)
3.2.4.1 Dendrite network operation 58(1)
3.2.4.2 Dendrite combine operation 59(1)
3.2.4.3 Dendrite compartmental latency 60(1)
3.2.5 Calculation Architecture Latency 61(1)
3.2.6 Exponent Architecture 62(1)
3.3 The Communication Architecture 63(5)
3.3.1 Communication Architecture Overview 63(1)
3.3.2 Cluster Controller 64(2)
3.3.3 Routing Network 66(2)
3.3.3.1 Routing method 66(1)
3.3.3.2 Design specification 67(1)
3.3.4 Interface Bridge 68(1)
3.4 Experimental Results 68(9)
3.4.1 Evaluation Method 68(3)
3.4.1.1 Building a test set 68(1)
3.4.1.2 Design simulation 69(1)
3.4.1.3 SystemC synthesis 70(1)
3.4.1.4 Post-synthesis simulation 70(1)
3.4.1.5 VHDL implementation 70(1)
3.4.2 Evaluation Results 71(2)
3.4.2.1 Accuracy results 71(1)
3.4.2.2 Latency results 71(1)
3.4.2.3 Resource usage 72(1)
3.4.3 Model Configuration 73(4)
3.5 Conclusions 77(1)
References 77(4)
4 Digital Neuron Cells for Highly Parallel Cognitive Systems 81(30)
Haipeng Lin
4.1 Introduction 81(2)
4.2 System Design Configuration 83(6)
4.2.1 Requirements 83(1)
4.2.2 Input and Output 84(1)
4.2.3 Parameters 85(1)
4.2.4 Scalability of Network 85(1)
4.2.5 Neuron Models Implementations 86(2)
4.2.6 Synthesis 88(1)
4.3 System Design Implementation 89(11)
4.3.1 Interface 89(2)
4.3.1.1 Inputs and outputs 89(1)
4.3.1.2 Locality of data 90(1)
4.3.1.2.1 Localization of inputs 90(1)
4.3.1.2.2 Localization of outputs 90(1)
4.3.2 Implementation of the Neuron Models 91(6)
4.3.2.1 The extended Hodgkin-Huxley model 91(1)
4.3.2.1.1 Neuron cell 91(1)
4.3.2.1.2 Physical cell 92(1)
4.3.2.1.3 Cluster 92(1)
4.3.2.2 Integrate-and-fire model 92(2)
4.3.2.3 Izhikevich model 94(1)
4.3.2.3.1 Axonal conduction delay 94(1)
4.3.2.3.2 STDP 96(1)
4.3.2.3.3 Spike generation 97(1)
4.3.3 High-level Synthesis 97(3)
4.3.3.1 Optimization with directives 97(1)
4.3.3.2 Adjustments of system for HLS 98(1)
4.3.3.2.1 Hodgkin-Huxley model 98(1)
4.3.3.2.2 Integrate-and-fire model 99(1)
4.3.3.2.3 Izhikevich model 99(1)
4.4 Performance Evaluation 100(5)
4.4.1 Model Configuration 100(1)
4.4.2 Experimental Results 101(4)
4.5 Conclusions 105(1)
References 106(5)
5 Energy-Efficient Multipath Ring Network for Heterogeneous Clustered Neuronal Arrays 111(32)
Andrei Ardelean
5.1 Introduction 111(1)
5.2 State-of-the-Art and Background Concepts 112(7)
5.2.1 Neuron Models 112(2)
5.2.2 Simulation Platforms 114(2)
5.2.3 Communication Network Considerations 116(3)
5.3 Neural Network Communication Schemes and System Structure 119(12)
5.3.1 Physical System Structure 119(4)
5.3.2 Extraction, Insertion, and Configuration Layer 123(1)
5.3.3 Topological Layer 124(7)
5.3.3.1 Multipath ring routing scheme 126(2)
5.3.3.2 Traffic model 128(3)
5.4 Energy-Delay Product 131(7)
5.4.1 Mathematical Derivation 132(3)
5.4.2 Energy-Delay Product Estimation 135(3)
5.5 Conclusions 138(1)
References 138(5)
6 A Hierarchical Dataflow Architecture for Large-Scale Multi-FPGA Biophysically Accurate Neuron Simulation 143(20)
He Zhang
6.1 Introduction 143(1)
6.2 The System Overview 144(5)
6.2.1 Mesh Topology 144(2)
6.2.2 The Routers 146(2)
6.2.3 The Clusters 148(1)
6.2.4 Hodgkin-Huxley Cells 148(1)
6.3 The Communication Architecture 149(6)
6.4 Experimental Results 155(5)
6.5 Conclusions 160(1)
References 160(3)
7 Single-Lead Neuromorphic ECG Classification System 163(26)
Eralp Kolagasioglu
7.1 Introduction 163(8)
7.1.1 ECG Signals and Arrhythmia 164(2)
7.1.2 Feature Detection 166(2)
7.1.2.1 Methods and algorithms 166(1)
7.1.2.1.1 QRS detection 166(1)
7.1.2.1.2 P and T wave detection 167(1)
7.1.3 Feature Selection 168(3)
7.1.3.1 Feature selection choices 170(1)
7.1.3.2 Methods and algorithms 170(1)
7.1.4 Classification Methods 171(1)
7.2 Feature Extraction Implementation 171(9)
7.2.1 Feature Detection 171(6)
7.2.1.1 QRS detection 171(2)
7.2.1.2 P and T wave detection 173(4)
7.2.2 Feature Selection 177(3)
7.2.2.1 Feature set 177(1)
7.2.2.2 Correlation matrix 178(2)
7.3 Network Configuration and Results 180(5)
7.3.1 Approach 180(1)
7.3.2 Silhouette Coefficients 181(1)
7.3.3 Clustering Methods for the Output 182(1)
7.3.4 Results 183(2)
7.4 Conclusion 185(1)
References 185(4)
8 Multi-Compartment Synaptic Circuit in Neuromorphic Structures 189(34)
Xuefei You
8.1 Introduction 189(4)
8.1.1 Synapse 189(4)
8.1.1.1 Synaptic plasticity 190(1)
8.1.1.2 Synaptic receptors 191(1)
8.1.1.2.1 AMPA receptor 191(1)
8.1.1.2.2 NMDA receptor 192(1)
8.1.1.2.3 GABA receptor 192(1)
8.2 Model Extraction 193(4)
8.2.1 Model of the Synapse 193(1)
8.2.2 Learning Rules 194(3)
8.2.2.1 Pair-based STDP 194(1)
8.2.2.1.1 Triplet-based STDP 196(1)
8.3 Component Implementations 197(6)
8.3.1 Learning Rule 1: Classic STDP 197(1)
8.3.2 Learning Rule 2: Advanced STDP 198(2)
8.3.3 Learning Rule 3: Triplet-Based STDP 200(1)
8.3.4 Synaptic Receptors 201(2)
8.3.4.1 AMPA receptor 201(1)
8.3.4.2 NMDA receptor 202(1)
8.3.4.3 GABA receptors 203(1)
8.4 Component Characterizations 203(10)
8.4.1 Learning Rule 1: Classic STDP 203(1)
8.4.2 Learning Rule 2: Advanced STDP 204(2)
8.4.3 Learning Rule 3: Triplet-based STDP 206(1)
8.4.4 Synaptic Receptors 207(6)
8.4.4.1 Environment settings 209(2)
8.4.4.2 Results 211(2)
8.5 Neural Network with Multi-Receptor Synapses 213(6)
8.5.1 Synchrony Detection Tool: Cross-Correlograms 213(1)
8.5.2 Environment Settings 214(2)
8.5.3 Input Patterns 216(1)
8.5.4 Synchrony Detection 217(2)
8.6 Conclusions 219(1)
References 220(3)
9 Conclusion and Future Work 223(6)
Amir Zjajo
Rene van Leuken
9.1 Summary of the Results 223(4)
9.2 Recommendations and Future Work 227(2)
Index 229(2)
About the Editors 231
Amir Zjajo, Delft University of Technology, The Netherlands

Rene van Leuken, Delft University of Technology, The Netherlands