E-book: Transfer Learning through Embedding Spaces [Taylor & Francis e-book]

  • Format: 198 pages, 10 Tables, black and white; 40 Line drawings, black and white; 40 Illustrations, black and white
  • Publication date: 29-Jun-2021
  • Publisher: Chapman & Hall/CRC
  • ISBN-13: 9781003146032
  • Taylor & Francis e-book
  • Price: 166.18 €*
  • * the price grants access for an unlimited number of simultaneous users for an unlimited period
  • List price: 237.40 €
  • You save 30%

Recent progress in artificial intelligence (AI) has revolutionized our everyday life. Many AI algorithms have reached human-level performance, and AI agents are replacing humans in many professions. It is predicted that this trend will continue and that 30% of work activities in 60% of current occupations will be automated.

This success, however, is conditioned on the availability of huge annotated datasets for training AI models. Data annotation is a time-consuming and expensive task that is still performed by human workers. Learning efficiently from less data is the next step toward making AI more similar to natural intelligence. Transfer learning has been suggested as a remedy to relax the need for data annotation. The core idea of transfer learning is to transfer knowledge across similar tasks, using similarities and previously learned knowledge to learn more efficiently.

In this book, we provide a brief background on transfer learning and then focus on the idea of transferring knowledge through intermediate embedding spaces. The idea is to couple and relate different learning problems through embedding spaces that encode task-level relations and similarities. We cover various machine learning scenarios and demonstrate that this idea can be used to overcome the challenges of zero-shot learning, few-shot learning, domain adaptation, continual learning, lifelong learning, and collaborative learning.
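To make the coupling idea concrete, here is a minimal, hypothetical sketch (not the book's algorithm): two modalities, e.g. visual features and class-attribute descriptions, are mapped into a shared embedding space, and an unseen class can then be recognized by nearest-neighbor matching against class prototypes in that space. All names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Linear map into the shared embedding space, L2-normalized."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy setup: 3 classes, each described by a 4-dim attribute vector.
attributes = np.eye(4)[:3]            # class descriptions (semantic side)
W_sem = rng.standard_normal((4, 8))   # semantic -> shared space
W_vis = W_sem.copy()                  # visual -> shared space (coupled; identical here for simplicity)

# Class prototypes live in the shared space.
class_protos = embed(attributes, W_sem)

# A "visual" sample from class 1: its attribute vector plus small noise.
x = attributes[1] + 0.05 * rng.standard_normal(4)
z = embed(x, W_vis)

# Classify by cosine similarity to the prototypes.
pred = int(np.argmax(class_protos @ z))
print(pred)  # → 1
```

In an actual zero-shot setting, the two maps would be learned jointly so that paired samples land nearby in the shared space; a new class then needs only its semantic description, not labeled images, to become recognizable.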

List of Figures xiii
List of Tables xvii
Preface xix
Acknowledgments xxi
Chapter 1 Introduction 1(10)
1.1 Knowledge Transfer Through Embedding Space 3(2)
1.2 Structure and Organization of the Book 5(6)
1.2.1 Cross-Domain Knowledge Transfer 5(2)
1.2.2 Cross-Task Knowledge Transfer 7(1)
1.2.3 Cross-Agent Knowledge Transfer 8(1)
1.2.4 Book Organization 8(3)
Chapter 2 Background and Related Work 11(16)
2.1 Knowledge Transfer Through Shared Representation Spaces 13(2)
2.2 Cross-Domain Knowledge Transfer 15(3)
2.2.1 Zero-Shot Learning 15(1)
2.2.2 Domain Adaptation 16(2)
2.3 Cross-Task Knowledge Transfer 18(3)
2.3.1 Multi-Task Learning 18(2)
2.3.2 Lifelong Learning 20(1)
2.4 Cross-Agent Knowledge Transfer 21(1)
2.5 Conclusions 22(5)
Section I Cross-Domain Knowledge Transfer
Chapter 3 Zero-Shot Image Classification through Coupled Visual and Semantic Embedding Spaces 27(20)
3.1 Overview 28(2)
3.2 Problem Formulation and Technical Rationale 30(3)
3.2.1 Proposed Idea 31(2)
3.2.2 Technical Rationale 33(1)
3.3 Zero-Shot Learning Using Coupled Dictionary Learning 33(5)
3.3.1 Training Phase 34(1)
3.3.2 Prediction of Unseen Attributes 35(1)
3.3.2.1 Attribute-Agnostic Prediction 35(1)
3.3.2.2 Attribute-Aware Prediction 36(1)
3.3.3 From Predicted Attributes to Labels 37(1)
3.3.3.1 Inductive Approach 37(1)
3.3.3.2 Transductive Learning 37(1)
3.4 Theoretical Discussion 38(2)
3.5 Experiments 40(5)
3.6 Conclusions 45(2)
Chapter 4 Learning a Discriminative Embedding for Unsupervised Domain Adaptation 47(18)
4.1 Introduction 49(1)
4.2 Related Work 50(1)
4.2.1 Semantic Segmentation 50(1)
4.2.2 Domain Adaptation 50(1)
4.3 Problem Formulation 51(1)
4.4 Proposed Algorithm 52(3)
4.5 Theoretical Analysis 55(3)
4.6 Experimental Validation 58(5)
4.6.1 Experimental Setup 58(1)
4.6.2 Results 58(4)
4.6.3 Ablation Study 62(1)
4.7 Conclusions 63(2)
Chapter 5 Few-Shot Image Classification through Coupled Embedding Spaces 65(24)
5.1 Overview 66(3)
5.2 Related Work 69(1)
5.3 Problem Formulation and Rationale 70(3)
5.4 Proposed Solution 73(1)
5.5 Theoretical Analysis 74(3)
5.6 Experimental Validation 77(6)
5.6.1 Ship Detection in SAR Domain 77(1)
5.6.2 Methodology 78(1)
5.6.3 Results 79(4)
5.7 Conclusions 83(6)
Section II Cross-Task Knowledge Transfer
Chapter 6 Lifelong Zero-Shot Learning Using High-Level Task Descriptors 89(30)
6.1 Overview 90(2)
6.2 Related Work 92(2)
6.3 Background 94(4)
6.3.1 Supervised Learning 94(1)
6.3.2 Reinforcement Learning 94(1)
6.3.3 Lifelong Machine Learning 95(3)
6.4 Lifelong Learning with Task Descriptors 98(5)
6.4.1 Task Descriptors 98(1)
6.4.2 Coupled Dictionary Optimization 99(3)
6.4.3 Zero-Shot Transfer Learning 102(1)
6.5 Theoretical Analysis 103(3)
6.5.1 Algorithm PAC-Learnability 103(2)
6.5.2 Theoretical Convergence of TaDeLL 105(1)
6.5.3 Computational Complexity 106(1)
6.6 Evaluation on Reinforcement Learning Domains 106(4)
6.6.1 Benchmark Dynamical Systems 106(1)
6.6.2 Methodology 107(1)
6.6.3 Results on Benchmark Systems 108(1)
6.6.4 Application to Quadrotor Control 109(1)
6.7 Evaluation on Supervised Learning Domains 110(3)
6.7.1 Predicting the Location of a Robot End-Effector 110(1)
6.7.2 Experiments on Synthetic Classification Domains 111(2)
6.8 Additional Experiments 113(3)
6.8.1 Choice of Task Descriptor Features 114(1)
6.8.2 Computational Efficiency 114(1)
6.8.3 Performance for Various Numbers of Tasks 115(1)
6.9 Conclusions 116(3)
Chapter 7 Complementary Learning Systems Theory for Tackling Catastrophic Forgetting 119(14)
7.1 Overview 121(1)
7.2 Related Work 122(1)
7.2.1 Model Consolidation 122(1)
7.2.2 Experience Replay 122(1)
7.3 Generative Continual Learning 123(2)
7.4 Optimization Method 125(1)
7.5 Theoretical Justification 126(2)
7.6 Experimental Validation 128(3)
7.6.1 Learning Sequential Independent Tasks 128(3)
7.6.2 Learning Sequential Tasks in Related Domains 131(1)
7.7 Conclusions 131(2)
Chapter 8 Continual Concept Learning 133(18)
8.1 Overview 134(1)
8.2 Related Work 135(1)
8.3 Problem Statement and the Proposed Solution 136(2)
8.4 Proposed Algorithm 138(2)
8.5 Theoretical Analysis 140(2)
8.6 Experimental Validation 142(3)
8.6.1 Learning Permuted MNIST Tasks 142(3)
8.6.2 Learning Sequential Digit Recognition Tasks 145(1)
8.7 Conclusions 145(6)
Section III Cross-Agent Knowledge Transfer
Chapter 9 Collective Lifelong Learning for Multi-Agent Networks 151(16)
9.1 Overview 151(3)
9.2 Lifelong Machine Learning 154(2)
9.3 Multi-Agent Lifelong Learning 156(4)
9.3.1 Dictionary Update Rule 158(2)
9.4 Theoretical Guarantees 160(3)
9.5 Experimental Results 163(3)
9.5.1 Datasets 163(1)
9.5.2 Evaluation Methodology 164(1)
9.5.3 Results 165(1)
9.6 Conclusions 166(1)
Chapter 10 Concluding Remarks and Potential Future Research Directions 167(6)
10.1 Summary and Discussions 167(3)
10.2 Future Research Directions 170(3)
Bibliography 173(24)
Index 197
Mohammad Rostami is a computer scientist at the USC Information Sciences Institute. He is a graduate of the University of Pennsylvania, the University of Waterloo, and Sharif University of Technology. His research areas include continual machine learning and learning in data-scarce regimes.