E-book: Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data

(Hong Kong Baptist University, Kowloon Tong, Hong Kong), (Ryerson University, Toronto, Canada), (University of Toronto, Ontario, Canada)
  • Format - EPUB+DRM
  • Price: 64.99 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Due to advances in sensor, storage, and networking technologies, data is being generated daily at an ever-increasing pace in a wide range of applications, including cloud computing, mobile Internet, and medical imaging. Such large multidimensional data require more efficient dimensionality reduction schemes than traditional techniques provide. Addressing this need, multilinear subspace learning (MSL) reduces the dimensionality of big data directly from its natural multidimensional representation, a tensor.
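To make the mapping concrete, the following is a minimal NumPy sketch of a Tucker-style tensor-to-tensor projection computed with a truncated higher-order SVD; it illustrates the general idea of projecting each mode of a tensor separately rather than vectorizing the data first. The function names, ranks, and random sample are illustrative assumptions, not the authors' companion MATLAB code.

    # Illustrative sketch only: a Tucker-style tensor-to-tensor projection
    # via truncated HOSVD. Not the book's MATLAB implementation.
    import numpy as np

    def unfold(tensor, mode):
        """Mode-n unfolding: the mode-`mode` fibers become columns."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def multilinear_project(X, ranks):
        """Project tensor X to a smaller core tensor, one factor per mode."""
        factors = []
        for mode, r in enumerate(ranks):
            # Leading left singular vectors of each mode-n unfolding.
            U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
            factors.append(U[:, :r])
        core = X
        for mode, U in enumerate(factors):
            # Mode-n product: contract mode `mode` with the factor transpose,
            # then move the new (smaller) axis back into position.
            core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
        return core, factors

    # A 3rd-order sample, e.g. a short video clip (height x width x frames),
    # compressed from 32*32*16 = 16384 numbers to 8*8*4 = 256.
    X = np.random.default_rng(0).standard_normal((32, 32, 16))
    core, factors = multilinear_project(X, ranks=(8, 8, 4))
    print(core.shape)  # (8, 8, 4)

Because each mode keeps its own small factor matrix, the projection never forms the 16384-dimensional vector that a linear method such as PCA would operate on, which is the practical advantage MSL exploits.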

Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data gives a comprehensive introduction to both theoretical and practical aspects of MSL for the dimensionality reduction of multidimensional data based on tensors. It covers the fundamentals, algorithms, and applications of MSL.

Emphasizing essential concepts and system-level perspectives, the authors provide a foundation for solving many of today's most interesting and challenging problems in big multidimensional data processing. They trace the history of MSL, detail recent advances, and explore future developments and emerging applications.

The book follows a unifying MSL framework formulation to systematically derive representative MSL algorithms. It describes various applications of the algorithms, along with their pseudocode. Implementation tips help practitioners with further development, evaluation, and application. The book also provides researchers with useful theoretical information on big multidimensional data in machine learning and pattern recognition. MATLAB® source code, data, and other materials are available at www.comp.hkbu.edu.hk/~haiping/MSL.html.

Reviews

"this book is built to be read as a rich and yet accessible introduction artfully structured for a specialized audience of new researchers and bleeding-edge practitioners. The treatment builds an overarching framework and provides an analytical reader with a well-expressed taxonomy on the foundations of historical developments and similarity in content and goals. Thus, packaged, current research is endowed with instant meaning and purpose, the derivation of which would initially elude a newcomer to this complex and articulated branch of machine learning." Computing Reviews, November 2014

"Experimentally inclined readers will probably like this book . Practitioners will appreciate that the presentation of the subject matter is goal oriented The structure that this book builds can allow a neophyte to avoid much of the initial confusion and wasted effort necessary to classify unfamiliar work and distinguish between what may be useful or not to ones intents and interests. an exquisitely enriched literature review that is almost good enough to use as an auxiliary graduate textbook a rich yet accessible introduction " Computing Reviews, October 2014

List of Figures xiii
List of Tables xvii
List of Algorithms xix
Acronyms and Symbols xxi
Preface xxv
1 Introduction
1(16)
1.1 Tensor Representation of Multidimensional Data
2(3)
1.2 Dimensionality Reduction via Subspace Learning
5(4)
1.3 Multilinear Mapping for Subspace Learning
9(2)
1.4 Roadmap
11(3)
1.5 Summary
14(3)
I Fundamentals and Foundations 17(88)
2 Linear Subspace Learning for Dimensionality Reduction
19(30)
2.1 Principal Component Analysis
20(4)
2.2 Independent Component Analysis
24(3)
2.3 Linear Discriminant Analysis
27(4)
2.4 Canonical Correlation Analysis
31(4)
2.5 Partial Least Squares Analysis
35(4)
2.6 Unified View of PCA, LDA, CCA, and PLS
39(1)
2.7 Regularization and Model Selection
40(3)
2.7.1 Regularizing Covariance Matrix Estimation
40(1)
2.7.2 Regularizing Model Complexity
41(1)
2.7.3 Model Selection
42(1)
2.8 Ensemble Learning
43(2)
2.8.1 Bagging
43(1)
2.8.2 Boosting
43(2)
2.9 Summary
45(1)
2.10 Further Reading
46(3)
3 Fundamentals of Multilinear Subspace Learning
49(22)
3.1 Multilinear Algebra Preliminaries
50(7)
3.1.1 Notations and Definitions
50(3)
3.1.2 Basic Operations
53(3)
3.1.3 Tensor/Matrix Distance Measure
56(1)
3.2 Tensor Decompositions
57(2)
3.2.1 CANDECOMP/PARAFAC
57(1)
3.2.2 Tucker Decomposition and HOSVD
58(1)
3.3 Multilinear Projections
59(4)
3.3.1 Vector-to-Vector Projection
59(2)
3.3.2 Tensor-to-Tensor Projection
61(1)
3.3.3 Tensor-to-Vector Projection
61(2)
3.4 Relationships among Multilinear Projections
63(1)
3.5 Scatter Measures for Tensors and Scalars
64(4)
3.5.1 Tensor-Based Scatters
64(3)
3.5.2 Scalar-Based Scatters
67(1)
3.6 Summary
68(1)
3.7 Further Reading
69(2)
4 Overview of Multilinear Subspace Learning
71(18)
4.1 Multilinear Subspace Learning Framework
72(2)
4.2 PCA-Based MSL Algorithms
74(2)
4.2.1 PCA-Based MSL through TTP
74(2)
4.2.2 PCA-Based MSL through TVP
76(1)
4.3 LDA-Based MSL Algorithms
76(2)
4.3.1 LDA-Based MSL through TTP
77(1)
4.3.2 LDA-Based MSL through TVP
77(1)
4.4 History and Related Works
78(3)
4.4.1 History of Tensor Decompositions
78(1)
4.4.2 Nonnegative Matrix and Tensor Factorizations
79(1)
4.4.3 Tensor Multiple Factor Analysis and Multilinear Graph-Embedding
80(1)
4.5 Future Research on MSL
81(5)
4.5.1 MSL Algorithm Development
81(3)
4.5.2 MSL Application Exploration
84(2)
4.6 Summary
86(1)
4.7 Further Reading
86(3)
5 Algorithmic and Computational Aspects
89(16)
5.1 Alternating Partial Projections for MSL
90(2)
5.2 Initialization
92(4)
5.2.1 Popular Initialization Methods
92(1)
5.2.2 Full Projection Truncation
93(1)
5.2.3 Interpretation of Mode-n Eigenvalues
94(1)
5.2.4 Analysis of Full Projection Truncation
95(1)
5.3 Projection Order, Termination, and Convergence
96(1)
5.4 Synthetic Data for Analysis of MSL Algorithms
97(2)
5.5 Feature Selection for TTP-Based MSL
99(2)
5.5.1 Supervised Feature Selection
100(1)
5.5.2 Unsupervised Feature Selection
101(1)
5.6 Computational Aspects
101(2)
5.6.1 Memory Requirements and Storage Needs
101(1)
5.6.2 Computational Complexity
102(1)
5.6.3 MATLAB® Implementation Tips for Large Datasets
102(1)
5.7 Summary
103(1)
5.8 Further Reading
104(1)
II Algorithms and Applications 105(100)
6 Multilinear Principal Component Analysis
107(34)
6.1 Generalized PCA
108(5)
6.1.1 GPCA Problem Formulation
108(1)
6.1.2 GPCA Algorithm Derivation
109(1)
6.1.3 Discussions on GPCA
110(2)
6.1.4 Reconstruction Error Minimization
112(1)
6.2 Multilinear PCA
113(7)
6.2.1 MPCA Problem Formulation
114(1)
6.2.2 MPCA Algorithm Derivation
114(2)
6.2.3 Discussions on MPCA
116(2)
6.2.4 Subspace Dimension Determination
118(2)
6.2.4.1 Sequential Mode Truncation
119(1)
6.2.4.2 Q-Based Method
119(1)
6.3 Tensor Rank-One Decomposition
120(4)
6.3.1 TROD Problem Formulation
120(1)
6.3.2 Greedy Approach for TROD
121(1)
6.3.3 Solving for the pth EMP
122(2)
6.4 Uncorrelated Multilinear PCA
124(7)
6.4.1 UMPCA Problem Formulation
124(1)
6.4.2 UMPCA Algorithm Derivation
125(5)
6.4.3 Discussions on UMPCA
130(1)
6.5 Boosting with MPCA
131(4)
6.5.1 Benefits of MPCA-Based Booster
132(1)
6.5.2 LDA-Style Boosting on MPCA Features
132(2)
6.5.3 Modified LDA Learner
134(1)
6.6 Other Multilinear PCA Extensions
135(6)
6.6.1 Two-Dimensional PCA
135(1)
6.6.2 Generalized Low Rank Approximation of Matrices
136(1)
6.6.3 Concurrent Subspace Analysis
136(1)
6.6.4 MPCA plus LDA
137(1)
6.6.5 Nonnegative MPCA
137(1)
6.6.6 Robust Versions of MPCA
137(1)
6.6.7 Incremental Extensions of MPCA
138(1)
6.6.8 Probabilistic Extensions of MPCA
138(1)
6.6.9 Weighted MPCA and MPCA for Binary Tensors
139(2)
7 Multilinear Discriminant Analysis
141(24)
7.1 Two-Dimensional LDA
142(3)
7.1.1 2DLDA Problem Formulation
142(1)
7.1.2 2DLDA Algorithm Derivation
143(2)
7.2 Discriminant Analysis with Tensor Representation
145(2)
7.2.1 DATER Problem Formulation
145(1)
7.2.2 DATER Algorithm Derivation
146(1)
7.3 General Tensor Discriminant Analysis
147(3)
7.4 Tensor Rank-One Discriminant Analysis
150(3)
7.4.1 TR1DA Problem Formulation
150(1)
7.4.2 Solving for the pth EMP
151(2)
7.5 Uncorrelated Multilinear Discriminant Analysis
153(9)
7.5.1 UMLDA Problem Formulation
153(1)
7.5.2 R-UMLDA Algorithm Derivation
154(6)
7.5.3 Aggregation of R-UMLDA Learners
160(2)
7.6 Other Multilinear Extensions of LDA
162(3)
7.6.1 Graph-Embedding for Dimensionality Reduction
162(1)
7.6.2 Graph-Embedding Extensions of Multilinear Discriminant Analysis
163(1)
7.6.3 Incremental and Sparse Multilinear Discriminant Analysis
164(1)
8 Multilinear ICA, CCA, and PLS
165(24)
8.1 Overview of Multilinear ICA Algorithms
166(1)
8.1.1 Multilinear Approaches for ICA on Vector-Valued Data
166(1)
8.1.2 Multilinear Approaches for ICA on Tensor-Valued Data
166(1)
8.2 Multilinear Modewise ICA
167(5)
8.2.1 Multilinear Mixing Model for Tensors
168(1)
8.2.2 Regularized Estimation of Mixing Tensor
168(1)
8.2.3 MMICA Algorithm Derivation
169(1)
8.2.4 Architectures and Discussions on MMICA
170(1)
8.2.5 Blind Source Separation on Synthetic Data
171(1)
8.3 Overview of Multilinear CCA Algorithms
172(1)
8.4 Two-Dimensional CCA
173(3)
8.4.1 2D-CCA Problem Formulation
173(1)
8.4.2 2D-CCA Algorithm Derivation
174(2)
8.5 Multilinear CCA
176(8)
8.5.1 MCCA Problem Formulation
176(2)
8.5.2 MCCA Algorithm Derivation
178(6)
8.5.3 Discussions on MCCA
184(1)
8.6 Multilinear PLS Algorithms
184(5)
8.6.1 N-Way PLS
184(1)
8.6.2 Higher-Order PLS
185(4)
9 Applications of Multilinear Subspace Learning
189(16)
9.1 Pattern Recognition System
190(1)
9.2 Face Recognition
191(5)
9.2.1 Algorithms and Their Settings
192(1)
9.2.2 Recognition Results for Supervised Learning Algorithms
193(1)
9.2.3 Recognition Results for Unsupervised Learning Algorithms
194(2)
9.3 Gait Recognition
196(2)
9.4 Visual Content Analysis in Computer Vision
198(2)
9.4.1 Crowd Event Visualization and Clustering
198(1)
9.4.2 Target Tracking in Video
199(1)
9.4.3 Action, Scene, and Object Recognition
199(1)
9.5 Brain Signal/Image Processing in Neuroscience
200(2)
9.5.1 EEG Signal Analysis
200(1)
9.5.2 fMRI Image Analysis
201(1)
9.6 DNA Sequence Discovery in Bioinformatics
202(1)
9.7 Music Genre Classification in Audio Signal Processing
202(1)
9.8 Data Stream Monitoring in Data Mining
203(1)
9.9 Other MSL Applications
204(1)
Appendix A Mathematical Background 205(14)
A.1 Linear Algebra Preliminaries
205(8)
A.1.1 Transpose
205(1)
A.1.2 Identity and Inverse Matrices
206(1)
A.1.3 Linear Independence and Vector Space Basis
206(1)
A.1.4 Products of Vectors and Matrices
207(2)
A.1.5 Vector and Matrix Norms
209(1)
A.1.6 Trace
209(1)
A.1.7 Determinant
210(1)
A.1.8 Eigenvalues and Eigenvectors
211(1)
A.1.9 Generalized Eigenvalues and Eigenvectors
212(1)
A.1.10 Singular Value Decomposition
212(1)
A.1.11 Power Method for Eigenvalue Computation
213(1)
A.2 Basic Probability Theory
213(2)
A.2.1 One Random Variable
213(1)
A.2.2 Two Random Variables
214(1)
A.3 Basic Constrained Optimization
215(1)
A.4 Basic Matrix Calculus
215(4)
A.4.1 Basic Derivative Rules
215(1)
A.4.2 Derivative of Scalar/Vector with Respect to Vector
216(1)
A.4.3 Derivative of Trace with Respect to Matrix
216(1)
A.4.4 Derivative of Determinant with Respect to Matrix
217(2)
Appendix B Data and Preprocessing 219(8)
B.1 Face Databases and Preprocessing
219(3)
B.1.1 PIE Database
219(1)
B.1.2 FERET Database
220(1)
B.1.3 Preprocessing of Face Images for Recognition
220(2)
B.2 Gait Database and Preprocessing
222(5)
B.2.1 USF Gait Challenge Database
222(2)
B.2.2 Gait Silhouette Extraction
224(1)
B.2.3 Normalization of Gait Samples
224(3)
Appendix C Software 227(4)
C.1 Software for Multilinear Subspace Learning
227(1)
C.2 Benefits of Open-Source Software
228(1)
C.3 Software Development Tips
228(3)
Bibliography 231(32)
Index 263
Haiping Lu, Konstantinos N. Plataniotis, Anastasios Venetsanopoulos