
E-book: Introduction to Machine Learning Algorithms: Basic Principles and Mathematics

  • Format: EPUB+DRM
  • Publication date: 09-Apr-2026
  • Publisher: Chapman & Hall/CRC
  • Language: English
  • ISBN-13: 9781040638774
  • Price: 195.00 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Mathematics is the foundation of machine learning algorithms. To understand the shortcomings of existing algorithms and develop more effective methods, it is essential to understand the mathematical concepts underlying these algorithms and their operational principles. This book serves as an introductory resource, outlining the preliminary concepts and offering insights into the mathematical foundations and operational mechanisms of machine learning algorithms. It describes the basic equations and interrelates the questions arising during practical applications of machine learning with the basic mathematical picture of the algorithms used.

• Introduces machine learning, highlights the central role of algorithms in machine learning, and explains the core mathematical prerequisites to understanding machine learning algorithms

• Systematically examines the sequential steps of the classical machine learning algorithms used for classification of data sets into distinct groups, regression, and clustering analysis

• Provides an overview of value-based, policy-based, and model-based reinforcement learning algorithms

This book is for academicians, scholars, students, and professionals engaged in the study of machine learning and artificial intelligence.




1. Algorithms in Everyday Life and Computer Science. 1.1 Introduction.
1.2 Human Algorithmic Activities. 1.3 Importance of Algorithms in Our
Everyday Activities. 1.4 Algorithms as the Foundation of Computer Science.
1.5 Desirable Features and Properties of an Algorithm. 1.6 Critical Steps in
Algorithm Formulation. 1.7 Algorithm and Programming Language. 1.8 Discussion
and Conclusions. References and Further Reading.
2. Algorithmic Foundations
of Machine and Deep Learning. 2.1 Introduction. 2.2 Artificial Intelligence.
2.3 Machine Learning. 2.4 Machine Learning Algorithms. 2.5 Deep Learning
Algorithms. 2.6 Differentiation Between Traditional Computer Algorithms and
Machine/Deep Learning Algorithms. 2.7 Types of Machine Learning and Deep
Learning Algorithms. 2.8 Modus Operandi of a Typical Machine Learning
Algorithm. 2.9 Examples of Machine Learning and Deep Learning Algorithms.
2.10 Types of Artificial Neural Networks. 2.11 Why Are Machine Learning
Algorithms Considered as Problem-Solving Strategies and Mathematical Engines
of Artificial Intelligence? 2.12 Role of Ethics in AI and ML Development.
2.13 Organizational Plan of the Book. 2.14 Discussion and Conclusions.
References and Further Reading.
3. Classification Algorithms: Logistic
Regression. 3.1 Introduction. 3.2 Primary Function of a Classification
Algorithm. 3.3 Binary, Multi-Class, and Multi-Label Classifiers. 3.4
Performance Evaluation Metrics of Classification Algorithms. 3.5 Logistic
Regression Classifier. 3.6 Discussion and Conclusions. References and Further
Reading.
4. Decision Trees and Random Forest Classifiers. 4.1 Introduction.
4.2 Decision Tree. 4.3 Random Forest Classification Algorithm. 4.4 Comparison
Between Decision Tree and Random Forest Classifiers. 4.5 Discussion and
Conclusions. References and Further Reading.
5. Support Vector Machines,
K-Nearest Neighbors, and Naïve Bayes' Classifier Algorithms. 5.1
Introduction. 5.2 Support Vector Machine. 5.3 SVM Terminology. 5.4 Main Steps
of an SVM Classifier. 5.5 K-Nearest Neighbors Algorithm. 5.6 Naïve Bayes
Classifier. 5.7 Comparison between Support Vector Machines, K-Nearest
Neighbors, and Naïve Bayes' Classifier Algorithms. 5.8 Discussion and
Conclusions. References and Further Reading.
6. Regression Analysis: Linear,
Multiple Linear, and Nonlinear Regression. 6.1 Introduction. 6.2 The
Regression Algorithm. 6.3 Regression Compared with Classification. 6.4 Linear
Regression Algorithms. 6.5 Non-Linear Regression Algorithms. 6.6 Regression
Lines and Related Terms. 6.7 Evaluation Metrics of Linear Regression. 6.8
Assumptions of Linear Regression. 6.9 Least Squares Regression. 6.10 Finding
the Best Fit Line in Linear Regression by Gradient Descent. 6.11 Comparison
between Simple Linear, Multiple Linear, and Polynomial Regression. 6.12
Discussion and Conclusions. References and Further Reading.
7. Lasso, Ridge,
and Support Vector Regression. 7.1 Introduction. 7.2 Lasso Regression
Algorithm (L1 Regularization). 7.3 Ridge Regression Algorithm (L2
Regularization). 7.4 Elastic Net Regression Algorithm. 7.5 Support Vector
Regression (SVR) Algorithm. 7.6 Comparison Between Lasso, Ridge, and Support
Vector Regression. 7.7 Discussion and Conclusions. References and Further
Reading.
8. Miscellaneous Regression Algorithms: Decision Tree, Random
Forest, KNN Regression, and Others. 8.1 Introduction. 8.2 Decision Tree
Regression Algorithm. 8.3 Random Forest Regression Algorithm. 8.4 K-Nearest
Neighbors (KNN) Regression Algorithm. 8.5 Gradient Boosting Regression
Algorithm. 8.6 Gaussian Process Regression (GPR) Algorithm. 8.7 Comparison
Between Decision Tree, Random Forest, KNN Regression, Gradient Boosting, and
Gaussian Process Regression Algorithms. 8.8 Discussion and Conclusions.
References and Further Reading.
9. Clustering Algorithms: Centroid-Based,
Density-Based, Distribution-Based, and Hierarchical. 9.1 Introduction.
9.2
Clustering Analysis. 9.3 Clustering in Comparison to Regression. 9.4 Types of
Clustering. 9.5 Centroid-Based Clustering or Partitioning. 9.6 Density-Based
Clustering. 9.7 Distribution-Based Clustering. 9.8 Hierarchical or
Connectivity-Based Clustering. 9.9 Comparison Between Centroid-Based,
Density-Based, Distribution-Based, and Hierarchical Clustering Algorithms.
9.10 Discussion and Conclusions. References and Further Reading.
10. Affinity
Propagation, Fuzzy Clustering, and OPTICS. 10.1 Introduction. 10.2 Affinity
Propagation. 10.3 Fuzzy Clustering. 10.4 OPTICS. 10.5 Comparison Between
Affinity Propagation, Fuzzy Clustering, and OPTICS. 10.6 Discussion and
Conclusions. References and Further Reading.
11. Feature Selection
Algorithms: Filter, Wrapper, and Embedded Methods. 11.1 Introduction. 11.2
Definitions. 11.3 Feature Selection Compared to Clustering. 11.4 Feature
Selection Contrasted with Dimensionality Reduction. 11.5 Feature Selection
Methods. 11.6 Feature Selection with Filter Methods. 11.7 A Wrapper-Style
Algorithm: Recursive Feature Elimination (RFE). 11.8 Feature Selection with
Embedded Methods. 11.9 Lasso and Ridge Regression. 11.10 Comparison Between
Filter, Wrapper, and Embedded Methods for Feature Selection. 11.11 Discussion
and Conclusions. References and Further Reading.
12. Feature Extraction
Algorithms: Principal Component Analysis and Linear Discriminant Analysis.
12.1 Introduction. 12.2 Definitions. 12.3 Feature Extraction as Opposed to
Feature Selection. 12.4 Feature Extraction in Opposition to Dimensionality
Reduction. 12.5 Principal Component Analysis. 12.6 Linear Discriminant
Analysis. 12.7 Comparison Between Principal Component Analysis and Linear
Discriminant Analysis. 12.8 Discussion and Conclusions. References and
Further Reading.
13. Feed Forward Neural Networks and Self-Organizing Maps.
13.1 Introduction. 13.2 Feed Forward Neural Network. 13.3 Self-Organizing
Map. 13.4 Comparison Between Feed-Forward Neural Networks and Self-Organizing
Maps. 13.5 Discussion and Conclusions. References and Further Reading.
14.
Perceptron, Multilayer Perceptron, and Radial Basis Function Networks. 14.1
Introduction. 14.2 Single-Layer Perceptron. 14.3 Multilayer Perceptron. 14.4
Radial Basis Function Network. 14.5 Comparison Between Perceptron, Multilayer
Perceptron, and Radial Basis Function Network. 14.6 Discussion and
Conclusions. References and Further Reading.
15. Convolutional Neural Networks.
15.1 Introduction. 15.2 Salient Characteristics of Convolutional Neural
Network. 15.3 Architecture of a CNN and Functions of Different Layers. 15.4
Training a CNN Algorithm. 15.5 Using a Trained CNN Algorithm. 15.6
Applications of CNN. 15.7 Advantages of CNN. 15.8 Disadvantages of CNN. 15.9
Discussion and Conclusions. References and Further Reading.
16. Recurrent,
Long Short-Term Memory and Transformer Networks. 16.1 Introduction. 16.2
Recurrent Neural Network. 16.3 Long Short-Term Memory Network. 16.4
Transformer Neural Network. 16.5 Comparison Between Recurrent, Long
Short-Term Memory, and Transformer Networks. 16.6 Discussion and Conclusions.
References and Further Reading.
17. Restricted Boltzmann Machine, and Deep
Belief Network. 17.1 Introduction. 17.2 Elementary Ideas About the Restricted
Boltzmann Machine. 17.3 Training a Restricted Boltzmann Machine. 17.4 Using a
Trained RBM for Inference. 17.5 Applications of RBM. 17.6 Advantages of RBM.
17.7 Disadvantages of RBM. 17.8 Basic Ideas About the Deep Belief Network.
17.9 Applications of DBN. 17.10 Advantages of DBN. 17.11 Disadvantages of
DBN. 17.12 Comparison Between Restricted Boltzmann Machine and Deep Belief
Network. 17.13 Discussion and Conclusions. References and Further Reading.
18. Generative Adversarial Networks. 18.1 Introduction. 18.2 Architecture of
a Generative Adversarial Network. 18.3 MinMax GAN Loss Function Formula. 18.4
Main Steps of GAN. 18.5 Applications of GANs. 18.6 Advantages of GANs. 18.7
Limitations of GANs. 18.8 Discussion and Conclusions. References and Further
Reading.
19. Autoencoders. 19.1 Introduction. 19.2 Architecture of the
Autoencoder and the Roles Played by the Layers. 19.3 Training Process of
Autoencoder Algorithm. 19.4 Using a Trained Autoencoder Algorithm. 19.5
Applications of Autoencoders. 19.6 Advantages of Autoencoders. 19.7
Disadvantages of Autoencoders. 19.8 Discussion and Conclusions. References
and Further Reading.
20. Modular Neural Networks. 20.1 Introduction. 20.2
Mathematical Representation of a Modular Neural Network. 20.3 Main Steps in
Designing and Implementing a Modular Neural Network. 20.4 Applications of
Modular Neural Networks. 20.5 Advantages of Modular Neural Networks. 20.6
Disadvantages of Modular Neural Networks. 20.7 Comparison of a Modular Neural
Network with a Single Large Neural Network. 20.8 Discussion and Conclusions.
References and Further Reading.
21. Value-Based, Policy-Based, and Model-Based
Reinforcement Learning Algorithms. 21.1 Introduction. 21.2 Value-Based
Reinforcement Learning. 21.3 Policy-Based Reinforcement Learning. 21.4
Model-Based Reinforcement Learning. 21.5 Hybrid Reinforcement Learning. 21.6
Comparison Between Value-Based, Policy-Based, and Model-Based Reinforcement
Learning Algorithms. 21.7 Discussion and Conclusions. References and Further
Reading. Glossary-A: Key Terminology of Machine Learning Algorithms.
Glossary-B: Brief Rundown of Machine Learning Algorithms and Related Methods.
Index.
Vinod Kumar Khanna, Ph.D. (Physics) is an emeritus scientist, Council of Scientific and Industrial Research (CSIR), India, and Emeritus Professor, Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India; a retired chief scientist, CSIR-Central Electronics Engineering Research Institute, Pilani, India and professor, AcSIR, India. He has worked for more than 37 years on the design, fabrication, and characterization of power semiconductor devices, MEMS, and nanotechnology-based sensors. He has published 194 research papers in refereed journals and conference proceedings, 23 books, and six chapters in edited books. He has five patents to his credit, including two US and three Indian patents.