
E-book: Elements of Artificial Neural Networks

  • Format: 360 pages
  • Series: Complex Adaptive Systems
  • Publication date: 11-Oct-1996
  • Publisher: MIT Press
  • Language: English
  • ISBN-13: 9780262279628
  • Format: PDF+DRM
  • Price: 109,20 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you must install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Elements of Artificial Neural Networks provides a clearly organized general introduction, focusing on a broad range of algorithms, for students and others who want to use neural networks rather than simply study them. The authors, who have been developing and team teaching the material in a one-semester course over the past six years, describe most of the basic neural network models (with several detailed solved examples) and discuss the rationale and advantages of the models, as well as their limitations. The approach is practical and open-minded and requires very little mathematical or technical background. Written from a computer science and statistics point of view, the text stresses links to contiguous fields and can easily serve as a first course for students in economics and management.

The opening chapter sets the stage, presenting the basic concepts in a clear and objective way and tackling important -- yet rarely addressed -- questions related to the use of neural networks in practical situations. Subsequent chapters on supervised learning (single layer and multilayer networks), unsupervised learning, and associative models are structured around classes of problems to which networks can be applied. Applications are discussed along with the algorithms. A separate chapter takes up optimization methods.

The most frequently used algorithms, such as backpropagation, are introduced early on, right after perceptrons, so that these can form the basis for initiating course projects. Algorithms published as late as 1995 are also included. All of the algorithms are presented using block-structured pseudo-code, and exercises are provided throughout. Software implementing many commonly used neural network algorithms is available at the book's website.

Transparency masters, including abbreviated text and figures for the entire book, are available for instructors using the text.
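The description notes that the most frequently used algorithms are introduced right after perceptrons and given in block-structured pseudo-code. As a flavor of the kind of algorithm covered, here is a minimal sketch of the classic perceptron training rule in Python (an illustration of the standard technique, not the book's own code; the AND-function example and parameter choices are this sketch's assumptions):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Perceptron training rule: update weights only on misclassified samples."""
    w = np.zeros(X.shape[1])  # weight vector, one entry per input feature
    b = 0.0                   # bias term
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # threshold activation
            if pred != target:
                update = lr * (target - pred)   # +lr or -lr
                w += update * xi
                b += update
                errors += 1
        if errors == 0:  # termination criterion: no misclassifications left
            break
    return w, b

# Learn the linearly separable AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
```

Because AND is linearly separable, the perceptron convergence guarantee (discussed in the book's "Guarantee of Success" section) ensures this loop terminates with a separating hyperplane.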

Preface xiii
Introduction 1(42)
  History of Neural Networks 4(3)
  Structure and Function of a Single Neuron 7(9)
    Biological neurons 7(2)
    Artificial neuron models 9(7)
  Neural Net Architectures 16(6)
    Fully connected networks 17(1)
    Layered networks 18(1)
    Acyclic networks 18(2)
    Feedforward networks 20(1)
    Modular neural networks 21(1)
  Neural Learning 22(2)
    Correlation learning 22(1)
    Competitive learning 22(1)
    Feedback-based weight adaptation 23(1)
  What Can Neural Networks Be Used for? 24(11)
    Classification 25(1)
    Clustering 25(2)
    Vector quantization 27(1)
    Pattern association 27(2)
    Function approximation 29(2)
    Forecasting 31(1)
    Control applications 32(2)
    Optimization 34(1)
    Search 34(1)
  Evaluation of Networks 35(3)
    Quality of results 36(1)
    Generalizability 37(1)
    Computational resources 38(1)
    Implementation 38(1)
  Conclusion 39(2)
  Exercises 41(2)
Supervised Learning: Single-Layer Networks 43(22)
  Perceptrons 43(2)
  Linear Separability 45(1)
  Perceptron Training Algorithm 46(6)
    Termination criterion 50(1)
    Choice of learning rate 50(1)
    Non-numeric inputs 51(1)
  Guarantee of Success 52(2)
  Modifications 54(7)
    Pocket algorithm 55(2)
    Adalines 57(3)
    Multiclass discrimination 60(1)
  Conclusion 61(1)
  Exercises 62(3)
Supervised Learning: Multilayer Networks I 65(46)
  Multilevel Discrimination 66(1)
  Preliminaries 67(3)
    Architecture 67(1)
    Objectives 68(2)
  Backpropagation Algorithm 70(9)
  Setting the Parameter Values 79(9)
    Initialization of weights 79(1)
    Frequency of weight updates 80(1)
    Choice of learning rate 81(2)
    Momentum 83(1)
    Generalizability 84(1)
    Number of hidden layers and nodes 85(1)
    Number of samples 86(2)
  Theoretical Results 88(5)
    Cover's theorem 88(2)
    Representations of functions 90(1)
    Approximations of functions 91(2)
  Accelerating the Learning Process 93(5)
    Quickprop algorithm 93(1)
    Conjugate gradient 94(4)
  Applications 98(7)
    Weaning from mechanically assisted ventilation 98(2)
    Classification of myoelectric signals 100(1)
    Forecasting commodity prices 101(2)
    Controlling a gantry crane 103(2)
  Conclusion 105(1)
  Exercises 106(5)
Supervised Learning: Multilayer Networks II 111(46)
  Madalines 111(5)
  Adaptive Multilayer Networks 116(20)
    Network pruning algorithms 116(2)
    Marchand's algorithm 118(5)
    Upstart algorithm 123(5)
    Neural Tree 128(2)
    Cascade correlation 130(3)
    Tiling algorithm 133(3)
  Prediction Networks 136(5)
    Recurrent networks 136(3)
    Feedforward networks for forecasting 139(2)
  Radial Basis Functions 141(8)
  Polynomial Networks 149(4)
  Regularization 153(1)
  Conclusion 154(1)
  Exercises 155(2)
Unsupervised Learning 157(60)
  Winner-Take-All Networks 161(12)
    Hamming networks 161(2)
    Maxnet 163(1)
    Simple competitive learning 164(9)
  Learning Vector Quantizers 173(3)
  Counterpropagation Networks 176(4)
  Adaptive Resonance Theory 180(7)
  Topologically Organized Networks 187(14)
    Self-organizing maps 188(7)
    Convergence* 195(2)
    Extensions 197(4)
  Distance-Based Learning 201(3)
    Maximum entropy 202(1)
    Neural gas 202(2)
  Neocognitron 204(4)
  Principal Component Analysis Networks 208(5)
  Conclusion 213(1)
  Exercises 214(3)
Associative Models 217(50)
  Non-iterative Procedures for Association 219(8)
  Hopfield Networks 227(17)
    Discrete Hopfield networks 228(9)
    Storage capacity of Hopfield networks* 237(4)
    Continuous Hopfield networks 241(3)
  Brain-State-in-a-Box Network 244(5)
  Boltzmann Machines 249(6)
    Mean field annealing 254(1)
  Hetero-associators 255(7)
  Conclusion 262(1)
  Exercises 263(4)
Optimization Methods 267(40)
  Optimization using Hopfield Networks 269(10)
    Traveling salesperson problem 270(5)
    Solving simultaneous linear equations 275(1)
    Allocating documents to multiprocessors 276(3)
  Iterated Gradient Descent 279(1)
  Simulated Annealing 280(5)
  Random Search 285(2)
  Evolutionary Computation 287(13)
    Evolutionary algorithms 288(2)
    Initialization 290(1)
    Termination criterion 290(1)
    Reproduction 291(2)
    Operators 293(3)
    Replacement 296(3)
    Schema Theorem* 299(1)
  Conclusion 300(2)
  Exercises 302(5)
A Little Math 307(8)
  Calculus 307(2)
  Linear Algebra 309(1)
  Statistics 310(5)
Data 315(16)
  Iris Data 315(1)
  Classification of Myoelectric Signals 316(2)
  Gold Prices 318(1)
  Clustering Animal Features 319(1)
  3-D Corners, Grid and Approximation 319(4)
  Eleven-City Traveling Salesperson Problem (Distances) 323(1)
  Daily Stock Prices of Three Companies, over the Same Period 324(3)
  Spiral Data 327(4)
Bibliography 331(8)
Index 339