Robust Representation for Data Analytics: Models and Applications 1st ed. 2017 [Hardback]

  • Format: Hardback, XI + 224 pages, 52 illustrations (49 in colour, 3 in black and white), height x width: 235x155 mm, weight: 5221 g
  • Series: Advanced Information and Knowledge Processing
  • Publication date: 29-Aug-2017
  • Publisher: Springer International Publishing AG
  • ISBN-10: 331960175X
  • ISBN-13: 9783319601755
  • Hardback
  • Price: 113.55 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 133.59 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
  • Free shipping
This book introduces the concepts and models of robust representation learning, and provides a set of solutions for real-world data analytics tasks such as clustering, classification, time series modeling, outlier detection, collaborative filtering, and community detection. Three types of robust feature representations are developed, which extend the understanding of graph, subspace, and dictionary. Leveraging the theory of low-rank and sparse modeling, the authors develop robust feature representations under various learning paradigms, including unsupervised learning, supervised learning, semi-supervised learning, multi-view learning, transfer learning, and deep learning. Robust Representations for Data Analytics covers a wide range of applications in the research fields of big data, human-centered computing, pattern recognition, digital marketing, web mining, and computer vision.
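The description mentions low-rank and sparse modeling as the technical backbone of these robust representations. As a purely illustrative sketch (not an algorithm taken from the book), the NumPy snippet below shows the classic low-rank plus sparse decomposition (robust PCA), where a data matrix is split into a low-rank component and a sparse outlier component; the parameter choices follow common inexact-ALM heuristics and are assumptions, not values from the text.

```python
import numpy as np

def robust_pca(M, lam=None, tol=1e-7, max_iter=500):
    """Illustrative sketch: split M into a low-rank part L and a sparse part S
    (M ~ L + S) with a basic inexact augmented Lagrange multiplier scheme."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # common heuristic
    norm_fro = np.linalg.norm(M, "fro")
    mu = 1.25 / np.linalg.norm(M, 2)                             # penalty parameter
    Y = M / max(np.linalg.norm(M, 2), np.max(np.abs(M)) / lam)   # dual variable
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Sparse update: elementwise soft-thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        # Dual ascent and penalty growth
        Y = Y + mu * (M - L - S)
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(M - L - S, "fro") / norm_fro < tol:
            break
    return L, S

# Toy usage: a low-rank matrix corrupted by sparse noise
rng = np.random.default_rng(0)
clean = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
noise = (rng.random((100, 80)) < 0.05) * rng.standard_normal((100, 80)) * 10
L, S = robust_pca(clean + noise)
print(np.linalg.matrix_rank(L, tol=1e-3), np.count_nonzero(np.abs(S) > 1e-3))
```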

Introduction.- Fundamentals of Robust Representations.- Part I: Robust Representation Models.- Robust Graph Construction.- Robust Subspace Learning.- Robust Multi-View Subspace Learning.- Robust Dictionary Learning.- Part II: Applications.- Robust Representations for Collaborative Filtering.- Robust Representations for Response Prediction.- Robust Representations for Outlier Detection.- Robust Representations for Person Re-Identification.- Robust Representations for Community Detection.- Index.
1 Introduction 1(8)
1.1 What Are Robust Data Representations? 2(1)
1.2 Organization of the Book 3(6)
Part I Robust Representation Models
2 Fundamentals of Robust Representations 9(8)
2.1 Representation Learning Models 9(2)
2.1.1 Subspace Learning 9(1)
2.1.2 Multi-view Subspace Learning 10(1)
2.1.3 Dictionary Learning 11(1)
2.2 Robust Representation Learning 11(6)
2.2.1 Subspace Clustering 12(1)
2.2.2 Low-Rank Modeling 12(1)
References 13(4)
3 Robust Graph Construction 17(28)
3.1 Overview 17(3)
3.2 Existing Graph Construction Methods 20(2)
3.2.1 Unbalanced Graphs and Balanced Graph 20(1)
3.2.2 Sparse Representation Based Graphs 21(1)
3.2.3 Low-Rank Learning Based Graphs 21(1)
3.3 Low-Rank Coding Based Unbalanced Graph Construction 22(6)
3.3.1 Motivation 22(1)
3.3.2 Problem Formulation 23(2)
3.3.3 Optimization 25(2)
3.3.4 Complexity Analysis 27(1)
3.3.5 Discussions 28(1)
3.4 Low-Rank Coding Based Balanced Graph Construction 28(1)
3.4.1 Motivation and Formulation 28(1)
3.4.2 Optimization 29(1)
3.5 Learning with Graphs 29(2)
3.5.1 Graph Based Clustering 30(1)
3.5.2 Transductive Semi-supervised Classification 30(1)
3.5.3 Inductive Semi-supervised Classification 31(1)
3.6 Experiments 31(10)
3.6.1 Databases and Settings 32(1)
3.6.2 Spectral Clustering with Graph 33(2)
3.6.3 Semi-supervised Classification with Graph 35(3)
3.6.4 Discussions 38(3)
3.7 Summary 41(4)
References 41(4)
4 Robust Subspace Learning 45(28)
4.1 Overview 45(4)
4.2 Supervised Regularization Based Robust Subspace (SRRS) 49(8)
4.2.1 Problem Formulation 49(3)
4.2.2 Theoretical Analysis 52(1)
4.2.3 Optimization 53(2)
4.2.4 Algorithm and Discussions 55(2)
4.3 Experiments 57(12)
4.3.1 Object Recognition with Pixel Corruption 57(6)
4.3.2 Face Recognition with Illumination and Pose Variation 63(2)
4.3.3 Face Recognition with Occlusions 65(1)
4.3.4 Kinship Verification 66(1)
4.3.5 Discussions 67(2)
4.4 Summary 69(4)
References 69(4)
5 Robust Multi-view Subspace Learning 73(22)
5.1 Overview 73(3)
5.2 Problem Definition 76(1)
5.3 Multi-view Discriminative Bilinear Projection (MDBP) 77(7)
5.3.1 Motivation 78(1)
5.3.2 Formulation of MDBP 78(3)
5.3.3 Optimization Algorithm 81(2)
5.3.4 Comparison with Existing Methods 83(1)
5.4 Experiments 84(7)
5.4.1 UCI Daily and Sports Activity Dataset 84(3)
5.4.2 Multimodal Spoken Word Dataset 87(1)
5.4.3 Discussions 88(3)
5.5 Summary 91(4)
References 91(4)
6 Robust Dictionary Learning 95(28)
6.1 Overview 95(4)
6.2 Self-Taught Low-Rank (S-Low) Coding 99(7)
6.2.1 Motivation 99(1)
6.2.2 Problem Formulation 100(2)
6.2.3 Optimization 102(2)
6.2.4 Algorithm and Discussions 104(2)
6.3 Learning with S-Low Coding 106(1)
6.3.1 S-Low Clustering 106(1)
6.3.2 S-Low Classification 107(1)
6.4 Experiments 107(9)
6.4.1 Datasets and Settings 107(3)
6.4.2 Property Analysis 110(2)
6.4.3 Clustering Results 112(1)
6.4.4 Classification Results 113(3)
6.4.5 Discussions 116(1)
6.5 Summary 116(7)
References 117(6)
Part II Applications
7 Robust Representations for Collaborative Filtering 123(24)
7.1 Overview 123(2)
7.2 Collaborative Filtering 125(2)
7.2.1 Matrix Factorization for Collaborative Filtering 125(1)
7.2.2 Deep Learning for Collaborative Filtering 126(1)
7.3 Preliminaries 127(2)
7.3.1 Matrix Factorization 127(1)
7.3.2 Marginalized Denoising Auto-encoder (mDA) 128(1)
7.4 Our Approach 129(7)
7.4.1 Deep Collaborative Filtering (DCF): A General Framework 130(1)
7.4.2 DCF Using PMF + mDA 131(4)
7.4.3 Discussion 135(1)
7.5 Experiments 136(8)
7.5.1 Movie Recommendation 137(3)
7.5.2 Book Recommendation 140(1)
7.5.3 Response Prediction 141(2)
7.5.4 Discussion 143(1)
7.6 Summary 144(3)
References 145(2)
8 Robust Representations for Response Prediction 147(28)
8.1 Overview 147(2)
8.2 Response Prediction 149(2)
8.2.1 Prediction Models with Temporal Dynamics 150(1)
8.2.2 Prediction Models with Side Information 151(1)
8.3 Preliminaries 151(2)
8.3.1 Notations 152(1)
8.3.2 Problem Definition 152(1)
8.4 Dynamic Collective Matrix Factorization (DCMF) with Side Information 153(6)
8.4.1 CMF for Conversion Prediction 153(2)
8.4.2 Modeling Temporal Dynamics 155(2)
8.4.3 Modeling Side Information 157(1)
8.4.4 Discussions 157(2)
8.5 Optimization 159(3)
8.5.1 Algorithm 159(2)
8.5.2 Discussions 161(1)
8.6 Experiments 162(8)
8.6.1 Experiments on Public Data 162(2)
8.6.2 Conversion Prediction: Settings 164(2)
8.6.3 Conversion Prediction: Results and Discussions 166(2)
8.6.4 Effectiveness Measurement of Ads 168(1)
8.6.5 Discussions 168(2)
8.7 Summary 170(5)
References 171(4)
9 Robust Representations for Outlier Detection 175(28)
9.1 Overview 175(4)
9.2 Preliminary 179(2)
9.2.1 Outlier Detection 179(1)
9.2.2 Multi-view Outliers 180(1)
9.3 Multi-view Low-Rank Analysis (MLRA) 181(5)
9.3.1 Cross-View Low-Rank Analysis 181(4)
9.3.2 Outlier Score Estimation 185(1)
9.4 MLRA for Multi-view Group Outlier Detection 186(3)
9.4.1 Motivation 187(1)
9.4.2 Formulation and Algorithm 187(2)
9.5 Experiments 189(7)
9.5.1 Baselines and Evaluation Metrics 189(1)
9.5.2 Synthetic Multi-view Settings on Real Data 190(4)
9.5.3 Real-World Multi-view Data with Synthetic Outliers 194(1)
9.5.4 Real-World Multi-view Data with Real Outliers 195(1)
9.5.5 Group Outlier Detection 195(1)
9.5.6 Discussions 196(1)
9.6 Summary 196(7)
References 199(4)
10 Robust Representations for Person Re-identification 203(20)
10.1 Overview 203(2)
10.2 Person Re-identification 205(1)
10.3 Cross-View Projective Dictionary Learning (CPDL) 205(2)
10.3.1 Motivation 205(1)
10.3.2 Formulation of CPDL 206(1)
10.4 CPDL for Person Re-identification 207(4)
10.4.1 Feature Extraction 207(1)
10.4.2 CPDL for Image Representation 208(1)
10.4.3 CPDL for Patch Representation 209(1)
10.4.4 Matching and Fusion 210(1)
10.5 Optimization 211(2)
10.5.1 Optimizing Image-Level Representations 211(1)
10.5.2 Optimizing Patch-Level Representations 212(1)
10.6 Experiments 213(7)
10.6.1 Settings 214(1)
10.6.2 VIPeR Dataset 214(1)
10.6.3 CUHK01 Campus Dataset 215(2)
10.6.4 GRID Dataset 217(1)
10.6.5 Discussions 218(2)
10.7 Summary 220(3)
References 220(3)
Index 223