Hebbian Learning and Negative Feedback Networks, 2005 ed. [Hardback]

  • Format: Hardback, XVIII + 383 pages, height x width: 235x155 mm, weight: 1640 g
  • Series: Advanced Information and Knowledge Processing
  • Publication date: 05-Jan-2005
  • Publisher: Springer London Ltd
  • ISBN-10: 1852338830
  • ISBN-13: 9781852338831
  • Hardback
  • Price: 141.35 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 166.29 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
  • Free shipping
  • Delivery time 2-4 weeks
A state-of-the-art specialist monograph on artificial neural networks that use Hebbian learning, covering a wide range of real experiments and showing how its approaches can be applied to the analysis of real problems. The book takes a thorough approach and brings together a wide range of concepts into a coherent whole. Colin Fyfe writes with authority; he is a well-known, experienced researcher who has led a team working in this area at Paisley.

The central idea of Hebbian Learning and Negative Feedback Networks is that artificial neural networks using negative feedback of activation can use simple Hebbian learning to self-organise so that they uncover interesting structures in data sets. Two variants are considered: the first uses a single stream of data to self-organise. By changing the learning rules for the network, it is shown how to perform Principal Component Analysis, Exploratory Projection Pursuit, Independent Component Analysis, Factor Analysis and a variety of topology-preserving mappings for such data sets. The second family of variants uses two input data streams on which the networks self-organise. In their basic form, these networks are shown to perform Canonical Correlation Analysis, the statistical technique that finds the filters onto which projections of the two data streams have greatest correlation. The book encompasses a wide range of real experiments and shows how the approaches it formulates can be applied to the analysis of real problems.
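
The single-stream architecture summarised above has a very compact core: activation is fed forward, fed back negatively and subtracted from the input, and the weights are trained by simple Hebbian learning on the resulting residual; with the plain linear rule this is equivalent to Oja's subspace algorithm and so converges to the principal subspace of the data. The Python sketch below is only an illustration of that idea, not code from the book; the function name, learning rate and toy data are assumptions made for the example.

import numpy as np

def negative_feedback_pca(X, n_outputs, eta=0.005, epochs=30, seed=0):
    """Train a single-stream negative feedback network on rows of X (samples x features)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Small random initial weights, one row per output neuron.
    W = rng.normal(scale=0.1, size=(n_outputs, n_features))
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:   # present samples in random order
            y = W @ x                          # feedforward activation
            e = x - W.T @ y                    # input minus negatively fed-back activation
            W += eta * np.outer(y, e)          # simple Hebbian update on the residual
    return W

# Toy usage: Gaussian data whose variance is concentrated in the first two directions,
# so the learned weight vectors should approximately span that principal subspace.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5)) * np.array([2.0, 1.5, 0.3, 0.3, 0.3])
W = negative_feedback_pca(X, n_outputs=2)
print(W)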

Reviews

From the reviews of the first edition:

"This book is concerned with developing unsupervised learning procedures and building self organizing network modules that can capture regularities of the environment. The book provides a detailed introduction to Hebbian learning and negative feedback neural networks and is suitable for self-study or instruction in an introductory course." (Nicolae S. Mera, Zentralblatt MATH, Vol. 1069, 2005)

Introduction 1(10)
Artificial Neural Networks 2(1)
The Organisation of this Book 3(8)
Part I Single Stream Networks
Background 11(20)
Hebbian Learning 11(2)
Quantification of Information 13(3)
Entropy and the Gaussian Distribution 14(2)
Principal Component Analysis 16(2)
Weight Decay in Hebbian Learning 18(3)
Principal Components and Weight Decay 19(2)
ANNs and PCA 21(2)
Oja's One Neuron Model 21(1)
Oja's Subspace Algorithm 22(1)
Oja's Weighted Subspace Algorithm 22(1)
Sanger's Generalized Hebbian Algorithm 23(1)
Anti-Hebbian Learning 23(2)
Independent Component Analysis 25(4)
A Restatement of the Problem 26(2)
A Neural Model for ICA 28(1)
Conclusion 29(2)
The Negative Feedback Network 31(26)
Introduction 31(10)
Equivalence to Oja's Subspace Algorithm 31(3)
Algorithm for PCA 34(1)
Plasticity and Continuity 35(1)
Speed of Learning and Information Content 36(1)
Analysis of Convergence 37(4)
The VW Model 41(5)
Properties of the VW Network 42(1)
Theoretical Discussion 43(3)
Using Distance Differences 46(2)
Equivalence to Sanger's Algorithm 47(1)
Minor Components Analysis 48(6)
Regression 49(1)
Use of Minor Components Analysis 50(2)
Robustness of Regression Solutions 52(1)
Application to ICA 53(1)
Conclusion 54(3)
Peer-Inhibitory Neurons 57(28)
Analysis of Differential Learning Rates 59(11)
The GW Anomaly 68(1)
Simulations 69(1)
Differential Activation Functions 70(12)
Model 1: Lateral Activation Functions 70(3)
Model 2: Lateral and Feedforward Activation Functions 73(2)
Model 3: Feedforward Activation Functions 75(2)
Summary 77(3)
Other Models 80(2)
Emergent Properties of the Peer-Inhibition Network 82(1)
Conclusion 83(2)
Multiple Cause Data 85(26)
A Typical Experiment 87(1)
Non-negative Weights 88(4)
Other Data Sets 89(2)
Theoretical Analysis 91(1)
Conclusion 92(1)
Factor Analysis 92(16)
Principal Factor Analysis 93(1)
The Varimax Rotation 94(1)
Relation to Non-negativity 94(1)
The Bars Data 95(1)
Continuous Data 96(1)
Generalised PCA 97(2)
Non-negative Outputs 99(1)
Additive Noise 100(1)
Dimensionality of the Output Space 101(1)
The Minimum Overcomplete Basis 102(2)
Simulations 104(4)
Conclusion 108(3)
Exploratory Data Analysis 111(26)
Exploratory Projection Pursuit 112(1)
Interesting Directions 112(1)
The Data and Sphering 113(1)
The Projection Pursuit Network 114(9)
Extending PCA 114(2)
The Projection Pursuit Indices 116(1)
Principal Component Analysis 117(1)
Convergence of the Algorithm 117(2)
Experimental Results 119(2)
Using Hyperbolic Functions 121(1)
Simulations 122(1)
Other Indices 123(4)
Indices Based on Information Theory 123(2)
Friedman's Index 125(2)
Intrator's Index 127(1)
Using Exploratory Projection Pursuit 127(6)
Hierarchical Exploratory Projection Pursuit 128(3)
World Bank Data 131(2)
Independent Component Analysis 133(3)
Conclusion 136(1)
Topology Preserving Maps 137(32)
Background 137(3)
Competitive Learning 138(1)
The Kohonen Feature Map 138(2)
The Classification Network 140(3)
Results 141(2)
Stochastic Neurons 143(1)
The Scale Invariant Map 143(6)
An Example 144(1)
Comparison with Kohonen Feature Maps 145(1)
Discussion 145(3)
Self-Organisation on Voice Data 148(1)
Vowel Classification 148(1)
The Subspace Map 149(9)
Summary of the Training Algorithm 151(2)
Training and Results 153(4)
Discussion 157(1)
The Negative Feedback Coding Network 158(9)
Results 159(1)
Statistics and Weights 160(2)
Reconstruction Error 162(1)
Topology Preservation 163(1)
Approximate Topological Equivalence 164(1)
A Hierarchical Feature Map 165(1)
A Biological Implementation 166(1)
Conclusion 167(2)
Maximum Likelihood Hebbian Learning 169(22)
The Negative Feedback Network and Cost Functions 169(2)
Is This a Hebbian Rule? 171(1)
Insensitive Hebbian Learning 171(5)
Principal Component Analysis 171(3)
Anti-Hebbian Learning 174(1)
Other Negative Feedback Networks 175(1)
The Maximum Likelihood EPP Algorithm 176(5)
Minimum Likelihood Hebbian Learning 177(1)
Experimental Results 178(2)
Skewness 180(1)
A Combined Algorithm 181(5)
Astronomical Data 182(1)
Wine 182(1)
Independent Component Analysis 183(3)
Conclusion 186(5)
Part II Dual Stream Networks
Two Neural Networks for Canonical Correlation Analysis 191(18)
Statistical Canonical Correlation Analysis 191(1)
The First Canonical Correlation Network 192(2)
Experimental Results 194(8)
Artificial Data 195(1)
Real Data 195(1)
Random Dot Stereograms 196(2)
Equal Correlations 198(1)
More Than Two Data Sets 199(1)
Many Correlations 200(2)
A Second Neural Implementation of CCA 202(2)
Simulations 204(2)
Artificial Data 204(1)
Real Data 205(1)
Linear Discriminant Analysis 206(1)
Discussion 207(2)
Alternative Derivations of CCA Networks 209(8)
A Probabilistic Perspective 209(2)
Putting Priors on the Probabilities 210(1)
Robust CCA 211(1)
A Model Derived from Becker's Model 1 212(3)
Who Is Telling the Truth? 213(1)
A Model Derived from Becker's Second Model 214(1)
Discussion 215(2)
Kernel and Nonlinear Correlations 217(30)
Nonlinear Correlations 217(4)
Experiment Results 217(4)
The Search for Independence 221(4)
Using Minimum Correlation to Extract Independent Sources 222(1)
Experiments 223(1)
Forecasting 223(2)
Kernel Canonical Correlation Analysis 225(9)
Kernel Principal Correlation Analysis 226(1)
Kernel Canonical Correlation Analysis 227(3)
Simulations 230(1)
ICA using KCCA 231(3)
Relevance Vector Regression 234(3)
Application to CCA 237(1)
Appearance-Based Object Recognition 237(3)
Mixtures of Linear Correlations 240(7)
Many Locally Linear Correlations 240(1)
Stone's Data 241(5)
Discussion 246(1)
Exploratory Correlation Analysis 247(28)
Exploratory Correlation Analysis 247(4)
Experiments 251(2)
Artificial Data 251(1)
Dual Stream Blind Source Separation 252(1)
Connection to CCA 253(1)
FastECA 254(2)
FastECA for Several Units 255(1)
Comparison of ECA and FastECA 256(1)
Local Filter Formation From Natural Stereo Images 256(10)
Biological Vision 256(3)
Sparse Coding of Natural Images 259(1)
Stereo Experiments 260(6)
Twinned Maximum Likelihood Learning 266(4)
Unmixing of Sound Signals 270(1)
Conclusion 271(4)
Multicollinearity and Partial Least Squares 275(16)
The Ridge Model 276(1)
Application to CCA 276(4)
Relation to Partial Least Squares 279(1)
Extracting Multiple Canonical Correlations 280(1)
Experiments on Multicollinear Data 281(3)
Artificial Data 281(1)
Examination Data 281(1)
Children's Gait Data 281(3)
A Neural Implementation of Partial Least Squares 284(4)
Introducing Nonlinear Correlations 284(1)
Simulations 285(1)
Linear Neural PLS 285(1)
Mixtures of Linear Neural PLS 286(1)
Nonlinear Neural PLS Regression 287(1)
Conclusion 288(3)
Twinned Principal Curves 291(18)
Twinned Principal Curves 291(2)
Properties of Twinned Principal Curves 293(12)
Comparison with Single Principal Curves 293(2)
Illustrative Examples 295(2)
Intersecting Curves 297(2)
Termination Criteria: MSE 299(1)
Termination Criteria: Using Derivative Information 299(2)
Alternative Twinned Principal Curves 301(4)
Twinned Self-Organising Maps 305(2)
Predicting Student's Exam Marks 306(1)
Discussion 307(2)
The Future 309(6)
Review 309(3)
Omissions 312(1)
Current and Future Work 312(3)
A Negative Feedback Artificial Neural Networks 315(8)
A.1 The Interneuron Model 315(2)
A.2 Other Models 317(3)
A.2.1 Static Models 318(1)
A.2.2 Dynamic Models 319(1)
A.3 Related Biological Models 320(3)
B Previous Factor Analysis Models 323(18)
B.1 Foldiak's Sixth Model 323(3)
B.1.1 Implementation Details 324(1)
B.1.2 Results 325(1)
B.2 Competitive Hebbian Learning 326(1)
B.3 Multiple Cause Models 327(3)
B.3.1 Saund's Model 328(2)
B.3.2 Dayan and Zemel 330(1)
B.4 Predictability Minimisation 330(2)
B.5 Mixtures of Experts 332(2)
B.5.1 An Example 334(1)
B.6 Probabilistic Models 334(7)
B.6.1 Mixtures of Gaussians 335(2)
B.6.2 A Logistic Belief Network 337(1)
B.6.3 The Helmholtz Machine and the EM Algorithm 337(1)
B.6.4 The Wake-Sleep Algorithm 338(1)
B.6.5 Olshausen and Field's Sparse Coding Network 339(2)
C Related Models for ICA 341(12)
C.1 Jutten and Herault 341(3)
C.1.1 An Example Separation 342(1)
C.1.2 Learning the Weights 343(1)
C.2 Nonlinear PCA 344(1)
C.2.1 Simulations and Discussion 345(1)
C.3 Information Maximisation 345(5)
C.3.1 The Learning Algorithm 347(3)
C.4 Penalised Minimum Reconstruction Error 350(1)
C.4.1 Adding Competition 350(1)
C.5 Fast ICA 351(2)
C.5.1 FastICA for One Unit 352(1)
D Previous Dual Stream Approaches 353(10)
D.1 The I-Max Model 353(2)
D.2 Stone's Model 355(1)
D.3 Kay's Neural Models 356(2)
D.4 Borga's Algorithm 358(5)
E Data Sets 363(8)
E.1 Artificial Data Sets 363(3)
E.1.1 Gaussian Data 363(1)
E.1.2 Bars Data 363(1)
E.1.3 Sinusoids 364(1)
E.1.4 Random Dot Stereograms 365(1)
E.1.5 Nonlinear Manifolds 365(1)
E.2 Real Data Sets 366(5)
E.2.1 Wine Data 366(1)
E.2.2 Astronomical Data 366(1)
E.2.3 The Cetin Data Set 366(1)
E.2.4 Exam Data 366(1)
E.2.5 Children's Gait Data 367(1)
E.2.6 Speech Signals 367(1)
E.2.7 Bank Database 368(1)
E.2.8 World Bank Data 368(1)
E.2.9 Exchange Rate Data 368(1)
E.2.10 Power Load Data 368(1)
E.2.11 Image Data 368(3)
References 371(10)
Index 381