Neural Networks and Statistical Learning, Second Edition, 2019 [Hardback]

  • Format: Hardback, 988 pages, height x width: 235x155 mm, weight: 1688 g, 184 illustrations (70 in color, 114 black and white)
  • Publication date: 25-Sep-2019
  • Publisher: Springer London Ltd
  • ISBN-10: 1447174518
  • ISBN-13: 9781447174516
  • Hardback
  • Price: 132,08 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 155,39 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
  • Free shipping
  • Delivery time 2-4 weeks

This book provides a broad yet detailed introduction to neural networks and machine learning in a statistical framework. A single, comprehensive resource for study and further research, it explores the major popular neural network models and statistical learning approaches, with examples and exercises that give readers a practical working understanding of the material. This updated edition presents recently published results and includes six new chapters covering recent advances in computational learning theory, sparse coding, deep learning, big data, and cloud computing.

Each chapter features state-of-the-art descriptions and significant research findings. The topics covered include:

• multilayer perceptron;
• associative memory;
• clustering;
• reinforcement learning;
• probabilistic and Bayesian networks;
• fuzzy sets and logic.

Focusing on prominent accomplishments and their practical aspects, this book provides academic and technical staff, as well as graduate students and researchers, with a solid foundation and comprehensive reference on the fields of neural networks, pattern recognition, signal processing, and machine learning.

Inclusive coverage of all the essential neural network applications in a statistical learning framework makes this a baseline text for students and researchers, with 31 chapters on all the major approaches and a wealth of examples and exercises.

Reviews

Neural Networks and Statistical Learning by Ke-Lin Du and M. N. S. Swamy can be seen as a central reference point for the mathematical understanding and implementation of the core ideas of neuronal networks and statistical learning techniques. (Jan Pablo Burgard, SIAM Review, Vol. 62 (4), 2020)

Contents

1 Introduction
1.1 Major Events in Machine Learning Research
1.2 Neurons
1.2.1 McCulloch-Pitts Neuron Model
1.2.2 Spiking Neuron Models
1.3 Neural Networks
1.4 Neural Network Processors
1.5 Scope of the Book
References
2 Fundamentals of Machine Learning
2.1 Learning and Inference Methods
2.1.1 Scientific Reasoning
2.1.2 Supervised, Unsupervised, and Reinforcement Learnings
2.1.3 Semi-supervised Learning and Active Learning
2.1.4 Other Learning Methods
2.2 Learning and Generalization
2.2.1 Generalization Error
2.2.2 Generalization by Stopping Criterion
2.2.3 Generalization by Regularization
2.2.4 Dropout
2.2.5 Fault Tolerance and Generalization
2.2.6 Sparsity Versus Stability
2.3 Model Selection
2.3.1 Cross-Validation
2.3.2 Complexity Criteria
2.4 Bias and Variance
2.5 Criterion Functions
2.6 Robust Learning
2.7 Neural Networks as Universal Machines
2.7.1 Boolean Function Approximation
2.7.2 Linear Separability and Nonlinear Separability
2.7.3 Continuous Function Approximation
2.7.4 Winner-Takes-All
References
3 Elements of Computational Learning Theory
3.1 Introduction
3.2 Probably Approximately Correct (PAC) Learning
3.2.1 Sample Complexity
3.3 Vapnik-Chervonenkis Dimension
3.3.1 Teaching Dimension
3.4 Rademacher Complexity
3.5 Empirical Risk-Minimization Principle
3.5.1 Function Approximation, Regularization, and Risk Minimization
3.6 Fundamental Theorem of Learning Theory
3.7 No-Free-Lunch Theorem
References
4 Perceptrons
4.1 One-Neuron Perceptron
4.2 Single-Layer Perceptron
4.3 Perceptron Learning Algorithm
4.4 Least Mean Squares (LMS) Algorithm
4.5 P-Delta Rule
4.6 Other Learning Algorithms
References
5 Multilayer Perceptrons: Architecture and Error Backpropagation
5.1 Introduction
5.2 Universal Approximation
5.3 Backpropagation Learning Algorithm
5.4 Incremental Learning Versus Batch Learning
5.5 Activation Functions for the Output Layer
5.6 Optimizing Network Structure
5.6.1 Network Pruning Using Sensitivity Analysis
5.6.2 Network Pruning Using Regularization
5.6.3 Network Growing
5.7 Speeding Up Learning Process
5.7.1 Eliminating Premature Saturation
5.7.2 Adapting Learning Parameters
5.7.3 Initializing Weights
5.7.4 Adapting Activation Function
5.8 Some Improved BP Algorithms
5.8.1 BP with Global Descent
5.8.2 Robust BP Algorithms
5.9 Resilient Propagation (Rprop)
5.10 Spiking Neural Network Learning
References
6 Multilayer Perceptrons: Other Learning Techniques
6.1 Introduction to Second-Order Learning Methods
6.2 Newton's Methods
6.2.1 Gauss-Newton Method
6.2.2 Levenberg-Marquardt Method
6.3 Quasi-Newton Methods
6.3.1 BFGS Method
6.3.2 One-Step Secant Method
6.4 Conjugate Gradient Methods
6.5 Extended Kalman Filtering Methods
6.6 Recursive Least Squares
6.7 Natural-Gradient-Descent Method
6.8 Other Learning Algorithms
6.8.1 Layerwise Linear Learning
6.9 Escaping Local Minima
6.10 Complex-Valued MLPs and Their Learning
6.10.1 Split Complex BP
6.10.2 Fully Complex BP
References
7 Hopfield Networks, Simulated Annealing, and Chaotic Neural Networks
7.1 Hopfield Model
7.2 Continuous-Time Hopfield Network
7.3 Simulated Annealing
7.4 Hopfield Networks for Optimization
7.4.1 Combinatorial Optimization Problems
7.4.2 Escaping Local Minima
7.4.3 Solving Other Optimization Problems
7.5 Chaos and Chaotic Neural Networks
7.5.1 Chaos, Bifurcation, and Fractals
7.5.2 Chaotic Neural Networks
7.6 Multistate Hopfield Networks
7.7 Cellular Neural Networks
References
8 Associative Memory Networks
8.1 Introduction
8.2 Hopfield Model: Storage and Retrieval
8.2.1 Generalized Hebbian Rule
8.2.2 Pseudoinverse Rule
8.2.3 Perceptron-Type Learning Rule
8.2.4 Retrieval Stage
8.3 Storage Capability of Hopfield Model
8.4 Increasing Storage Capacity
8.5 Multistate Hopfield Networks as Associative Memories
8.6 Multilayer Perceptrons as Associative Memories
8.7 Hamming Network
8.8 Bidirectional Associative Memories
8.9 Cohen-Grossberg Model
8.10 Cellular Networks as Associative Memories
References
9 Clustering I: Basic Clustering Models and Algorithms
9.1 Vector Quantization
9.2 Competitive Learning
9.3 Self-Organizing Maps
9.3.1 Kohonen Network
9.3.2 Basic Self-Organizing Maps
9.4 Learning Vector Quantization
9.5 Nearest Neighbor Algorithms
9.6 Neural Gas
9.7 ART Networks
9.7.1 ART Models
9.7.2 ART 1
9.8 C-Means Clustering
9.9 Subtractive Clustering
9.10 Fuzzy Clustering
9.10.1 Fuzzy C-Means Clustering
9.10.2 Other Fuzzy Clustering Algorithms
References
10 Clustering II: Topics in Clustering
10.1 Underutilization Problem
10.1.1 Competitive Learning with Conscience
10.1.2 Rival Penalized Competitive Learning
10.1.3 Soft-Competitive Learning
10.2 Robust Clustering
10.2.1 Possibilistic C-Means
10.2.2 A Unified Framework for Robust Clustering
10.3 Supervised Clustering
10.4 Clustering Using Non-Euclidean Distance Measures
10.5 Partitional, Hierarchical, and Density-Based Clustering
10.6 Hierarchical Clustering
10.6.1 Distance Measures, Cluster Representations, and Dendrograms
10.6.2 Minimum Spanning Tree (MST) Clustering
10.6.3 BIRCH, CURE, CHAMELEON, and DBSCAN
10.6.4 Hybrid Hierarchical/Partitional Clustering
10.7 Constructive Clustering Techniques
10.8 Cluster Validity
10.8.1 Measures Based on Compactness and Separation of Clusters
10.8.2 Measures Based on Hypervolume and Density of Clusters
10.8.3 Crisp Silhouette and Fuzzy Silhouette
10.9 Projected Clustering
10.10 Spectral Clustering
10.11 Coclustering
10.12 Handling Qualitative Data
10.13 Bibliographical Notes
References
11 Radial Basis Function Networks
11.1 Introduction
11.2 RBF Network Architecture
11.3 Universal Approximation of RBF Networks
11.4 Formulation for RBF Network Learning
11.5 Radial Basis Functions
11.6 Learning RBF Centers
11.7 Learning the Weights
11.7.1 Least Squares Methods for Weights Learning
11.8 RBF Network Learning Using Orthogonal Least Squares
11.9 Supervised Learning of All Parameters
11.9.1 Supervised Learning for General RBF Networks
11.9.2 Supervised Learning for Gaussian RBF Networks
11.9.3 Discussion on Supervised Learning
11.10 Various Learning Methods
11.11 Normalized RBF Networks
11.12 Optimizing Network Structure
11.12.1 Constructive Methods
11.12.2 Resource-Allocating Networks
11.12.3 Pruning Methods
11.13 Complex RBF Networks
11.14 A Comparison of RBF Networks and MLPs
References
12 Recurrent Neural Networks
12.1 Introduction
12.2 Fully Connected Recurrent Networks
12.3 Time-Delay Neural Networks
12.4 Backpropagation for Temporal Learning
12.5 RBF Networks for Modeling Dynamic Systems
12.6 Some Recurrent Models
12.7 Reservoir Computing
References
13 Principal Component Analysis
13.1 Introduction
13.1.1 Hebbian Learning Rule
13.1.2 Oja's Learning Rule
13.2 PCA: Conception and Model
13.3 Hebbian Rule-Based PCA
13.3.1 Subspace Learning Algorithms
13.3.2 Generalized Hebbian Algorithm
13.4 Least Mean Squared Error-Based PCA
13.4.1 Other Optimization-Based PCA
13.5 Anti-Hebbian Rule-Based PCA
13.5.1 APEX Algorithm
13.6 Nonlinear PCA
13.6.1 Autoassociative Network-Based Nonlinear PCA
13.7 Minor Component Analysis
13.7.1 Extracting the First Minor Component
13.7.2 Self-Stabilizing Minor Component Analysis
13.7.3 Oja-Based MCA
13.7.4 Other Algorithms
13.8 Constrained PCA
13.8.1 Sparse PCA
13.9 Localized PCA, Incremental PCA, and Supervised PCA
13.10 Complex-Valued PCA
13.11 Two-Dimensional PCA
13.12 Generalized Eigenvalue Decomposition
13.13 Singular Value Decomposition
13.13.1 Cross-Correlation Asymmetric PCA Networks
13.13.2 Extracting Principal Singular Components for Nonsquare Matrices
13.13.3 Extracting Multiple Principal Singular Components
13.14 Factor Analysis
13.15 Canonical Correlation Analysis
References
14 Nonnegative Matrix Factorization
14.1 Introduction
14.2 Algorithms for NMF
14.2.1 Multiplicative Update Algorithm and Alternating Nonnegative Least Squares
14.3 Other NMF Methods
14.3.1 NMF Methods for Clustering
14.3.2 Concept Factorization
14.4 Nyström Method
14.5 CUR Decomposition
References
15 Independent Component Analysis
15.1 Introduction
15.2 ICA Model
15.3 Approaches to ICA
15.4 Popular ICA Algorithms
15.4.1 Infomax ICA
15.4.2 EASI, JADE, and Natural Gradient ICA
15.4.3 FastICA Algorithm
15.5 ICA Networks
15.6 Some BSS Methods
15.6.1 Nonlinear ICA
15.6.2 Constrained ICA
15.6.3 Nonnegativity ICA
15.6.4 ICA for Convolutive Mixtures
15.6.5 Other BSS/ICA Methods
15.7 Complex-Valued ICA
15.8 Source Separation for Time Series
15.9 EEG, MEG, and fMRI
References
16 Discriminant Analysis
16.1 Linear Discriminant Analysis
16.2 Solving Small Sample Size Problem
16.3 Fisherfaces
16.4 Regularized LDA
16.5 Uncorrelated LDA and Orthogonal LDA
16.6 LDA/GSVD and LDA/QR
16.7 Incremental LDA
16.8 Other Discriminant Methods
16.9 Nonlinear Discriminant Analysis
16.10 Two-Dimensional Discriminant Analysis
References
17 Reinforcement Learning
17.1 Introduction
17.2 Learning Through Rewards
17.3 Actor-Critic Model
17.4 Model-Free and Model-Based Reinforcement Learning
17.5 Learning from Demonstrations
17.6 Temporal-Difference Learning
17.6.1 TD(λ)
17.6.2 Sarsa(λ)
17.7 Q-Learning
17.8 Multiagent Reinforcement Learning
17.8.1 Equilibrium-Based Multiagent Reinforcement Learning
17.8.2 Learning Automata
References
18 Compressed Sensing and Dictionary Learning
18.1 Introduction
18.2 Compressed Sensing
18.2.1 Restricted Isometry Property
18.2.2 Sparse Recovery
18.2.3 Iterative Hard Thresholding
18.2.4 Orthogonal Matching Pursuit
18.2.5 Restricted Isometry Property for Signal Recovery Methods
18.2.6 Tensor Compressive Sensing
18.3 Sparse Coding and Dictionary Learning
18.4 LASSO
18.5 Other Sparse Algorithms
References
19 Matrix Completion
19.1 Introduction
19.2 Matrix Completion
19.2.1 Minimizing the Nuclear Norm
19.2.2 Matrix Factorization-Based Methods
19.2.3 Theoretical Guarantees on Exact Matrix Completion
19.2.4 Discrete Matrix Completion
19.3 Low-Rank Representation
19.4 Tensor Factorization and Tensor Completion
19.4.1 Tensor Factorization
19.4.2 Tensor Completion
References
20 Kernel Methods
20.1 Introduction
20.2 Kernel Functions and Representer Theorem
20.3 Kernel PCA
20.4 Kernel LDA
20.5 Kernel Clustering
20.6 Kernel Auto-associators, Kernel CCA, and Kernel ICA
20.7 Other Kernel Methods
20.7.1 Random Kitchen Sinks and Fastfood
20.8 Multiple Kernel Learning
References
21 Support Vector Machines
21.1 Introduction
21.2 SVM Model
21.2.1 SVM Versus Neural Networks
21.3 Solving the Quadratic Programming Problem
21.3.1 Chunking
21.3.2 Decomposition
21.3.3 Convergence of Decomposition Methods
21.4 Least Squares SVMs
21.5 SVM Training Methods
21.5.1 SVM Algorithms with Reduced Kernel Matrix
21.5.2 ν-SVM
21.5.3 Cutting-Plane Technique
21.5.4 Gradient-Based Methods
21.5.5 Training SVM in the Primal Formulation
21.5.6 Clustering-Based SVM
21.5.7 Other SVM Methods
21.6 Pruning SVMs
21.7 Multiclass SVMs
21.8 Support Vector Regression
21.8.1 Solving Support Vector Regression
21.9 Support Vector Clustering
21.10 SVMs for One-Class Classification
21.11 Incremental SVMs
21.12 SVMs for Active, Transductive, and Semi-supervised Learnings
21.12.1 SVMs for Active Learning
21.12.2 SVMs for Transductive or Semi-supervised Learning
21.13 Solving SVM with Indefinite Matrices
References
22 Probabilistic and Bayesian Networks
22.1 Introduction
22.1.1 Classical Versus Bayesian Approach
22.1.2 Bayes' Theorem and Bayesian Classifiers
22.1.3 Graphical Models
22.2 Bayesian Network Model
22.3 Learning Bayesian Networks
22.3.1 Learning the Structure
22.3.2 Learning the Parameters
22.3.3 Constraint-Handling
22.4 Bayesian Network Inference
22.4.1 Belief Propagation
22.4.2 Factor Graphs and Belief Propagation Algorithm
22.5 Sampling (Monte Carlo) Methods
22.5.1 Gibbs Sampling
22.5.2 Importance Sampling
22.5.3 Particle Filtering
22.6 Variational Bayesian Methods
22.7 Hidden Markov Models
22.8 Dynamic Bayesian Networks
22.9 Expectation-Maximization Method
22.10 Mixture Models
22.11 Bayesian and Probabilistic Approach to Machine Learning
22.11.1 Probabilistic PCA
22.11.2 Probabilistic Clustering
22.11.3 Probabilistic ICA
22.11.4 Probabilistic Approach to SVM
22.11.5 Relevance Vector Machines
References
23 Boltzmann Machines
23.1 Boltzmann Machines
23.1.1 Boltzmann Learning Algorithm
23.2 Restricted Boltzmann Machines
23.2.1 Universal Approximation
23.2.2 Contrastive Divergence Algorithm
23.2.3 Related Methods
23.3 Mean-Field-Theory Machine
23.4 Stochastic Hopfield Networks
References
24 Deep Learning
24.1 Introduction
24.2 Deep Neural Networks
24.2.1 Deep Networks Versus Shallow Networks
24.3 Deep Belief Networks
24.3.1 Training Deep Belief Networks
24.4 Deep Autoencoders
24.5 Deep Convolutional Neural Networks
24.5.1 Solving the Difficulties of Gradient Descent
24.5.2 Implementing Deep Convolutional Neural Networks
24.6 Deep Reinforcement Learning
24.7 Other Deep Neural Network Methods
References
25 Combining Multiple Learners: Data Fusion and Ensemble Learning
25.1 Introduction
25.1.1 Ensemble Learning Methods
25.1.2 Aggregation
25.2 Majority Voting
25.3 Bagging
25.4 Boosting
25.4.1 AdaBoost
25.4.2 Other Boosting Algorithms
25.5 Random Forests
25.5.1 AdaBoost Versus Random Forests
25.6 Topics in Ensemble Learning
25.6.1 Ensemble Neural Networks
25.6.2 Diversity Versus Ensemble Accuracy
25.6.3 Theoretical Analysis
25.6.4 Ensembles for Streams
25.7 Solving Multiclass Classification
25.7.1 One-Against-All Strategy
25.7.2 One-Against-One Strategy
25.7.3 Error-Correcting Output Codes (ECOCs)
25.8 Dempster-Shafer Theory of Evidence
References
26 Introduction to Fuzzy Sets and Logic
26.1 Introduction
26.2 Definitions and Terminologies
26.3 Membership Function
26.4 Intersection, Union and Negation
26.5 Fuzzy Relation and Aggregation
26.6 Fuzzy Implication
26.7 Reasoning and Fuzzy Reasoning
26.7.1 Modus Ponens and Modus Tollens
26.7.2 Generalized Modus Ponens
26.7.3 Fuzzy Reasoning Methods
26.8 Fuzzy Inference Systems
26.8.1 Fuzzy Rules and Fuzzy Inference
26.8.2 Fuzzification and Defuzzification
26.9 Fuzzy Models
26.9.1 Mamdani Model
26.9.2 Takagi-Sugeno-Kang Model
26.10 Complex Fuzzy Logic
26.11 Possibility Theory
26.12 Case-Based Reasoning
26.13 Granular Computing and Ontology
References
27 Neurofuzzy Systems
27.1 Introduction
27.1.1 Interpretability
27.2 Rule Extraction from Trained Neural Networks
27.2.1 Fuzzy Rules and Multilayer Perceptrons
27.2.2 Fuzzy Rules and RBF Networks
27.2.3 Rule Extraction from SVMs
27.2.4 Rule Generation from Other Neural Networks
27.3 Extracting Rules from Numerical Data
27.3.1 Rule Generation Based on Fuzzy Partitioning
27.3.2 Other Methods
27.4 Synergy of Fuzzy Logic and Neural Networks
27.5 ANFIS Model
27.6 Generic Fuzzy Perceptron
27.7 Fuzzy SVMs
27.8 Other Neurofuzzy Models
References
28 Neural Network Circuits and Parallel Implementations
28.1 Introduction
28.2 Hardware/Software Codesign
28.3 Topics in Digital Circuit Designs
28.4 Circuits for Neural Networks
28.4.1 Memristor
28.4.2 Circuits for MLPs
28.4.3 Circuits for RBF Networks
28.4.4 Circuits for Clustering
28.4.5 Circuits for SVMs
28.4.6 Circuits for Other Neural Network Models
28.4.7 Circuits for Fuzzy Neural Models
28.5 Graphics Processing Unit (GPU) Implementation
28.6 Implementation Using Systolic Algorithms
28.7 Implementation on Parallel Computers
28.7.1 Distributed and Parallel SVMs
References
29 Pattern Recognition for Biometrics and Bioinformatics
29.1 Biometrics
29.1.1 Physiological Biometrics and Recognition
29.1.2 Behavioral Biometrics and Recognition
29.2 Face Detection and Recognition
29.2.1 Face Detection
29.2.2 Face Recognition
29.3 Bioinformatics
29.3.1 Microarray Technology
29.3.2 Motif Discovery, Sequence Alignment, Protein Folding, and Coclustering
References
30 Data Mining
30.1 Introduction
30.2 Document Representations for Text Categorization
30.3 Neural Network Approach to Data Mining
30.3.1 Classification-Based Data Mining
30.3.2 Clustering-Based Data Mining
30.3.3 Bayesian Network-Based Data Mining
30.4 XML Format
30.5 Association Mining
30.5.1 Affective Computing
30.6 Web Usage Mining
30.7 Ranking Search Results
30.7.1 Surfer Models
30.7.2 PageRank Algorithm
30.7.3 Hypertext-Induced Topic Search (HITS)
30.8 Personalized Search
30.9 Data Warehousing
30.10 Content-Based Image Retrieval
30.11 E-mail Anti-spamming
References
31 Big Data, Cloud Computing, and Internet of Things
31.1 Big Data
31.1.1 Introduction to Big Data
31.1.2 MapReduce
31.1.3 Hadoop Software Stack
31.1.4 Other Big Data Tools
31.1.5 NoSQL Databases
31.2 Cloud Computing
31.2.1 Service Models, Pricing, and Standards
31.2.2 Virtual Machines, Data Centers, and Intercloud Connections
31.2.3 Cloud Infrastructure Requirements
31.3 Internet of Things
31.3.1 Architecture of IoT
31.3.2 Cyber-Physical System Versus IoT
31.4 Fog/Edge Computing
31.5 Blockchain
References
Appendix A Mathematical Preliminaries
Appendix B Benchmarks and Resources
Index

Ke-Lin Du is currently the founder and CEO of Xonlink Inc., China. He is also an Affiliate Associate Professor in the Department of Electrical and Computer Engineering at Concordia University, Canada. He previously held positions at Huawei Technologies, the China Academy of Telecommunication Technology, the Chinese University of Hong Kong, the Hong Kong University of Science and Technology, Concordia University, and Enjoyor Inc. He has published four books and over 50 papers and has filed over 30 patents. He is a Senior Member of the IEEE, and his current research interests include signal processing, neural networks, intelligent systems, and wireless communications.

M. N. S. Swamy is currently a Research Professor and holder of the Concordia Tier I Research Chair in Signal Processing at the Department of Electrical and Computer Engineering, Concordia University, where he was Dean of the Faculty of Engineering and Computer Science from 1977 to 1993 and the founding Chair of the EE Department. He has published extensively in the areas of circuits, systems, and signal processing, has co-authored nine books, and holds five patents. Professor Swamy is a Fellow of the IEEE, IET (UK), and EIC (Canada), and has received many IEEE-CAS awards, including the Guillemin-Cauer Award in 1986 and both the Education Award and the Golden Jubilee Medal in 2000. He has been Editor-in-Chief of the journal Circuits, Systems and Signal Processing (CSSP) since 1999; recently, CSSP instituted a Best Paper Award in his name.