
E-book: Machine Learning in Computer Vision

  • Format: PDF+DRM
  • Price: 87.07 €*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You also need to create an Adobe ID. More info here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on mobile devices (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

It started with image processing in the sixties. Back then, it took ages to digitize a Landsat image and then process it with a mainframe computer. Processing was inspired by the achievements of signal processing and was still very much oriented towards programming. In the seventies, image analysis spun off, combining image measurement with statistical pattern recognition. Slowly, computational methods detached themselves from the sensor and the goal to become more generally applicable. In the eighties, model-driven computer vision originated when artificial intelligence and geometric modelling came together with image analysis components. The emphasis was on precise analysis with little or no interaction, still very much an art evaluated by visual appeal. The main bottleneck was in the amount of data, using an average of 5 to 50 pictures to illustrate the point. At the beginning of the nineties, vision became available to many with the advent of sufficiently fast PCs. The Internet revealed the interest of the general public in images, eventually introducing content-based image retrieval. Combining independent (informal) archives, as the web is, urges for interactive evaluation of approximate results and hence weak algorithms and their combination in weak classifiers.

Reviews

"This book comes right on time [...] It is amazing so early in a new field that a book appears which connects theory to algorithms and through them to convincing applications. [...] This book will surely be with us for quite some time to come." From the foreword by Arnold Smeulders

Table of Contents

Foreword
Preface
1. INTRODUCTION
  1 Research Issues on Learning in Computer Vision
  2 Overview of the Book
  3 Contributions
2. THEORY: PROBABILISTIC CLASSIFIERS
  1 Introduction
  2 Preliminaries and Notations
    2.1 Maximum Likelihood Classification
    2.2 Information Theory
    2.3 Inequalities
  3 Bayes Optimal Error and Entropy
  4 Analysis of Classification Error of Estimated (Mismatched) Distribution
    4.1 Hypothesis Testing Framework
    4.2 Classification Framework
  5 Density of Distributions
    5.1 Distributional Density
    5.2 Relating to Classification Error
  6 Complex Probabilistic Models and Small Sample Effects
  7 Summary
3. THEORY: GENERALIZATION BOUNDS
  1 Introduction
  2 Preliminaries
  3 A Margin Distribution Based Bound
    3.1 Proving the Margin Distribution Bound
  4 Analysis
    4.1 Comparison with Existing Bounds
  5 Summary
4. THEORY: SEMI-SUPERVISED LEARNING
  1 Introduction
  2 Properties of Classification
  3 Existing Literature
  4 Semi-supervised Learning Using Maximum Likelihood Estimation
  5 Asymptotic Properties of Maximum Likelihood Estimation with Labeled and Unlabeled Data
    5.1 Model Is Correct
    5.2 Model Is Incorrect
    5.3 Examples: Unlabeled Data Degrading Performance with Discrete and Continuous Variables
    5.4 Generating Examples: Performance Degradation with Univariate Distributions
    5.5 Distribution of Asymptotic Classification Error Bias
    5.6 Short Summary
  6 Learning with Finite Data
    6.1 Experiments with Artificial Data
    6.2 Can Unlabeled Data Help with Incorrect Models? Bias vs. Variance Effects and the Labeled-unlabeled Graphs
    6.3 Detecting When Unlabeled Data Do Not Change the Estimates
    6.4 Using Unlabeled Data to Detect Incorrect Modeling Assumptions
  7 Concluding Remarks
5. ALGORITHM: MAXIMUM LIKELIHOOD MINIMUM ENTROPY HMM
  1 Previous Work
  2 Mutual Information, Bayes Optimal Error, Entropy, and Conditional Probability
  3 Maximum Mutual Information HMMs
    3.1 Discrete Maximum Mutual Information HMMs
    3.2 Continuous Maximum Mutual Information HMMs
    3.3 Unsupervised Case
  4 Discussion
    4.1 Convexity
    4.2 Convergence
    4.3 Maximum A-posteriori View of Maximum Mutual Information HMMs
  5 Experimental Results
    5.1 Synthetic Discrete Supervised Data
    5.2 Speaker Detection
    5.3 Protein Data
    5.4 Real-time Emotion Data
  6 Summary
6. ALGORITHM: MARGIN DISTRIBUTION OPTIMIZATION
  1 Introduction
  2 A Margin Distribution Based Bound
  3 Existing Learning Algorithms
  4 The Margin Distribution Optimization (MDO) Algorithm
    4.1 Comparison with SVM and Boosting
    4.2 Computational Issues
  5 Experimental Evaluation
  6 Conclusions
7. ALGORITHM: LEARNING THE STRUCTURE OF BAYESIAN NETWORK CLASSIFIERS
  1 Introduction
  2 Bayesian Network Classifiers
    2.1 Naive Bayes Classifiers
    2.2 Tree-Augmented Naive Bayes Classifiers
  3 Switching between Models: Naive Bayes and TAN Classifiers
  4 Learning the Structure of Bayesian Network Classifiers: Existing Approaches
    4.1 Independence-based Methods
    4.2 Likelihood and Bayesian Score-based Methods
  5 Classification Driven Stochastic Structure Search
    5.1 Stochastic Structure Search Algorithm
    5.2 Adding VC Bound Factor to the Empirical Error Measure
  6 Experiments
    6.1 Results with Labeled Data
    6.2 Results with Labeled and Unlabeled Data
  7 Should Unlabeled Data Be Weighed Differently?
  8 Active Learning
  9 Concluding Remarks
8. APPLICATION: OFFICE ACTIVITY RECOGNITION
  1 Context-Sensitive Systems
  2 Towards Tractable and Robust Context Sensing
  3 Layered Hidden Markov Models (LHMMs)
    3.1 Approaches
    3.2 Decomposition per Temporal Granularity
  4 Implementation of SEER
    4.1 Feature Extraction and Selection in SEER
    4.2 Architecture of SEER
    4.3 Learning in SEER
    4.4 Classification in SEER
  5 Experiments
    5.1 Discussion
  6 Related Representations
  7 Summary
9. APPLICATION: MULTIMODAL EVENT DETECTION
  1 Fusion Models: A Review
  2 A Hierarchical Fusion Model
    2.1 Working of the Model
    2.2 The Duration Dependent Input Output Markov Model
  3 Experimental Setup, Features, and Results
  4 Summary
10. APPLICATION: FACIAL EXPRESSION RECOGNITION
  1 Introduction
  2 Human Emotion Research
    2.1 Affective Human-computer Interaction
    2.2 Theories of Emotion
    2.3 Facial Expression Recognition Studies
  3 Facial Expression Recognition System
    3.1 Face Tracking and Feature Extraction
    3.2 Bayesian Network Classifiers: Learning the "Structure" of the Facial Features
  4 Experimental Analysis
    4.1 Experimental Results with Labeled Data
      4.1.1 Person-dependent Tests
      4.1.2 Person-independent Tests
    4.2 Experiments with Labeled and Unlabeled Data
  5 Discussion
11. APPLICATION: BAYESIAN NETWORK CLASSIFIERS FOR FACE DETECTION
  1 Introduction
  2 Related Work
  3 Applying Bayesian Network Classifiers to Face Detection
  4 Experiments
  5 Discussion
References
Index