Introduction to Linear Algebra: Computation, Application, and Theory [Hardback]

Mark J. DeBonis (Manhattan College, USA)
  • Format: Hardback, 420 pages, height x width: 254x178 mm, weight: 866 g, 65 line drawings, black and white; 1 halftone, black and white; 66 illustrations, black and white
  • Series: Textbooks in Mathematics
  • Publication date: 15-Mar-2022
  • Publisher: Chapman & Hall/CRC
  • ISBN-10: 1032108983
  • ISBN-13: 9781032108988
Introduction to Linear Algebra: Computation, Application, and Theory is designed for students who have never before been exposed to the topics of a linear algebra course. The text is filled with interesting and diverse application sections, but it is also a theoretical text that aims to train students to do succinct computation in a knowledgeable way. After completing a course with this text, students will not only know the best and shortest way to do linear algebraic computations but will also understand why such computations are both effective and successful.

Features:

  • Includes cutting-edge applications in machine learning and data analytics
  • Suitable as a primary text for undergraduates studying linear algebra
  • Requires very little in the way of prerequisites

Reviews

"Exceptionally well organized and thoroughly 'student friendly' in presentation, Introduction To Linear Algebra: Computation, Application, and Theory is an ideal textbook for highschool, college, and university curriculums" - Midwest Books Review

Preface xi
Chapter 1 Examples of Vector Spaces 1(26)
1.1 First Vector Space: Tuples 1(5)
1.2 Dot Product 6(5)
1.3 Application: Geometry 11(4)
1.4 Second Vector Space: Matrices 15(6)
1.4.1 Special Matrix Families 17(4)
1.5 Matrix Multiplication 21(6)
Chapter 2 Matrices and Linear Systems 27(92)
2.1 Systems of Linear Equations 27(5)
2.2 Gaussian Elimination 32(10)
2.3 Application: Markov Chains 42(8)
2.4 Application: The Simplex Method 50(17)
2.5 Elementary Matrices and Matrix Equivalence 67(7)
2.6 Inverse of a Matrix 74(8)
2.7 Application: The Simplex Method Revisited 82(8)
2.8 Homogeneous/Non-Homogeneous Systems and Rank 90(5)
2.9 Determinant 95(9)
2.10 Applications of the Determinant 104(10)
2.11 Application: LU Factorization 114(5)
Chapter 3 Vector Spaces 119(70)
3.1 Definition and Examples 119(11)
3.2 Subspace 130(7)
3.3 Linear Independence 137(11)
3.4 Span 148(8)
3.5 Basis and Dimension 156(18)
3.6 Subspaces Associated with a Matrix 174(5)
3.7 Application: Dimension Theorems 179(10)
Chapter 4 Linear Transformations 189(82)
4.1 Definition and Examples 189(7)
4.2 Kernel and Image 196(10)
4.3 Matrix Representation 206(13)
4.4 Inverse and Isomorphism 219(10)
4.4.1 Background 219(1)
4.4.2 Inverse 220(3)
4.4.3 Isomorphism 223(6)
4.5 Similarity of Matrices 229(4)
4.6 Eigenvalues and Diagonalization 233(12)
4.7 Axiomatic Determinant 245(5)
4.8 Quotient Vector Space 250(11)
4.8.1 Equivalence Relations 250(5)
4.8.2 Introduction to Quotient Spaces 255(4)
4.8.3 Applications of Quotient Spaces 259(2)
4.9 Dual Vector Space 261(10)
Chapter 5 Inner Product Spaces 271(66)
5.1 Definition, Examples, and Properties 271(5)
5.2 Orthogonal and Orthonormal 276(7)
5.3 Orthogonal Matrices 283(7)
5.3.1 Definition and Results 283(2)
5.3.2 Application: Rotations and Reflections 285(5)
5.4 Application: QR Factorization 290(8)
5.5 Schur Triangularization Theorem 298(6)
5.6 Orthogonal Projections and Best Approximation 304(7)
5.7 Real Symmetric Matrices 311(3)
5.8 Singular Value Decomposition 314(5)
5.9 Application: Least Squares Optimization 319(18)
5.9.1 Overdetermined Systems 320(2)
5.9.2 Best Fitting Polynomial 322(6)
5.9.3 Linear Regression 328(2)
5.9.4 Underdetermined Systems 330(2)
5.9.5 Approximating Functions 332(5)
Chapter 6 Applications in Data Analytics 337(34)
6.1 Introduction 337(2)
6.2 Direction of Maximal Spread 339(4)
6.3 Principal Component Analysis 343(4)
6.4 Dimensionality Reduction 347(3)
6.5 Mahalanobis Distance 350(3)
6.6 Data Sphering 353(2)
6.7 Fisher Linear Discriminant Function 355(7)
6.8 Linear Discriminant Functions in Feature Space 362(5)
6.9 Minimal Square Error Linear Discriminant Function 367(4)
Chapter 7 Quadratic Forms 371(26)
7.1 Introduction to Quadratic Forms 371(2)
7.2 Principal Minor Criterion 373(6)
7.3 Eigenvalue Criterion 379(3)
7.4 Application: Unconstrained Non-Linear Optimization 382(7)
7.5 General Quadratic Forms 389(8)
Appendix A Regular Matrices 397(4)
Appendix B Rotations and Reflections in Two Dimensions 401(4)
Appendix C Answers to Selected Exercises 405(10)
References 415(2)
Index 417
Mark J. DeBonis received his PhD in Mathematics from the University of California, Irvine, USA. He began his career as a theoretical mathematician in the fields of group theory and model theory, but in later years switched to applied mathematics, in particular machine learning. He spent time working for the US Department of Energy at Los Alamos National Lab, as well as for the US Department of Defense at the Defense Intelligence Agency, as an applied mathematician specializing in machine learning. He is an Associate Professor of Mathematics at Manhattan College in New York City and is currently also working for the US Department of Energy at Sandia National Lab as a Principal Data Analyst. His research interests include machine learning, statistics, and computational algebra.