
E-book: Semisupervised Learning for Computational Linguistics

  • Format: PDF+DRM
  • Price: 80,59 €*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed
  • Usage: digital rights management (DRM)

    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software

    To read on a mobile device (phone or tablet), install this free application: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The rapid advancement in the theoretical understanding of statistical and machine learning methods for semisupervised learning has made it difficult for nonspecialists to keep up to date in the field. Providing a broad, accessible treatment of the theory as well as linguistic applications, Semisupervised Learning for Computational Linguistics offers self-contained coverage of semisupervised methods that includes background material on supervised and unsupervised learning.

The book presents a brief history of semisupervised learning and its place in the spectrum of learning methods before moving on to discuss well-known natural language processing methods, such as self-training and co-training. It then centers on machine learning techniques, including the boundary-oriented methods of perceptrons, boosting, support vector machines (SVMs), and the null-category noise model. In addition, the book covers clustering, the expectation-maximization (EM) algorithm, related generative methods, and agreement methods. It concludes with the graph-based method of label propagation as well as a detailed discussion of spectral methods.

Taking an intuitive approach to the material, this lucid book facilitates the application of semisupervised learning methods to natural language processing and provides the framework and motivation for a more systematic study of machine learning.
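
Among the methods named above, self-training is the simplest to state: train a classifier on the labeled data, have it label the unlabeled data, move its most confident predictions into the training set, and retrain. The following sketch illustrates that generic loop; it is not code from the book, it assumes scikit-learn and NumPy are available, and the toy dataset, confidence threshold, and number of rounds are arbitrary illustrative choices.

    # Minimal self-training loop (illustrative sketch, not the book's code).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Toy data: a small labeled seed set and a larger unlabeled pool.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_lab, y_lab = X[:50], y[:50]
    X_unlab = X[50:]

    clf = LogisticRegression(max_iter=1000)
    for _ in range(10):                      # fixed number of self-training rounds
        clf.fit(X_lab, y_lab)                # retrain on the current labeled set
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confidence = proba.max(axis=1)       # classifier confidence per instance
        keep = confidence >= 0.95            # arbitrary confidence threshold
        if not keep.any():
            break
        # Promote high-confidence predictions to training labels, shrink the pool.
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
        X_unlab = X_unlab[~keep]

Co-training, also covered in the book, follows the same pattern but maintains two classifiers trained on different feature views, each supplying confidently labeled examples to the other.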
Table of contents (each entry gives the starting page and, in parentheses, the number of pages):

Introduction 1(12)
A brief history 1(3)
Probabilistic methods in computational linguistics 1(1)
Supervised and unsupervised training 2(1)
Semisupervised learning 3(1)
Semisupervised learning 4(4)
Major varieties of learning problem 4(2)
Motivation 6(1)
Evaluation 7(1)
Active learning 8(1)
Organization and assumptions 8(5)
Leading ideas 8(2)
Mathematical background 10(1)
Notation 11(2)
Self-training and Co-training 13(18)
Classification 13(5)
The standard setting 13(1)
Features and rules 14(2)
Decision lists 16(2)
Self-training 18(10)
The algorithm 19(1)
Parameters and variants 20(3)
Evaluation 23(2)
Symmetry of features and instances 25(2)
Related algorithms 27(1)
Co-Training 28(3)
Applications of Self-Training and Co-Training 31(12)
Part-of-speech tagging 31(2)
Information extraction 33(2)
Parsing 35(1)
Word senses 36(7)
WordNet 36(2)
Word-sense disambiguation 38(2)
Taxonomic inference 40(3)
Classification 43(24)
Two simple classifiers 43(5)
Naive Bayes 43(2)
k-nearest-neighbor classifier 45(3)
Abstract setting 48(5)
Function approximation 48(2)
Defining success 50(2)
Fit and simplicity 52(1)
Evaluating detectors and classifiers that abstain 53(9)
Confidence-rated classifiers 53(1)
Measures for detection 54(3)
Idealized performance curves 57(2)
The multiclass case 59(3)
Binary classifiers and ECOC 62(5)
Mathematics for Boundary-Oriented Methods 67(28)
Linear separators 67(7)
Representing a hyperplane 67(2)
Eliminating the threshold 69(1)
The point-normal form 70(2)
Naive Bayes decision boundary 72(2)
The gradient 74(9)
Graphs and domains 74(2)
Convexity 76(3)
Differentiation of vector and matrix expressions 79(2)
An example: linear regression 81(2)
Constrained optimization 83(12)
Optimization 83(1)
Equality constraints 84(3)
Inequality constraints 87(4)
The Wolfe dual 91(4)
Boundary-Oriented Methods 95(36)
The perceptron 97(6)
The algorithm 97(2)
An example 99(1)
Convergence 100(1)
The perceptron algorithm as gradient descent 101(2)
Game self-teaching 103(2)
Boosting 105(9)
Abstention 110(1)
Semisupervised boosting 111(2)
Co-boosting 113(1)
Support Vector Machines (SVMs) 114(15)
The margin 114(2)
Maximizing the margin 116(3)
The nonseparable case 119(2)
Slack in the separable case 121(2)
Multiple slack points 123(2)
Transductive SVMs 125(2)
Training a transductive SVM 127(2)
Null-category noise model 129(2)
Clustering 131(22)
Cluster and label 131(1)
Clustering concepts 132(5)
Objective 132(1)
Distance and similarity 133(3)
Graphs 136(1)
Hierarchical clustering 137(2)
Self-training revisited 139(4)
k-means clustering 139(1)
Pseudo relevance feedback 140(3)
Graph mincut 143(3)
Label propagation 146(6)
Clustering by propagation 146(1)
Self-training as propagation 147(3)
Co-training as propagation 150(2)
Bibliographic notes 152(1)
Generative Models 153(22)
Gaussian mixtures 153(10)
Definition and geometric interpretation 153(3)
The linear discriminant decision boundary 156(3)
Decision-directed approximation 159(3)
McLachlan's algorithm 162(1)
The EM algorithm 163(12)
Maximizing likelihood 163(1)
Relative frequency estimation 164(2)
Divergence 166(3)
The EM algorithm 169(6)
Agreement Constraints 175(18)
Co-training 175(7)
The conditional independence assumption 176(2)
The power of conditional independence 178(4)
Agreement-based self-teaching 182(2)
Random fields 184(8)
Applied to self-training and co-training 184(2)
Gibbs sampling 186(1)
Markov chains and random walks 187(5)
Bibliographic notes 192(1)
Propagation Methods 193(28)
Label propagation 194(2)
Random walks 196(2)
Harmonic functions 198(5)
Fluids 203(10)
Flow 203(2)
Pressure 205(4)
Conservation of energy 209(1)
Thomson's principle 210(3)
Computing the solution 213(2)
Graph mincuts revisited 215(5)
Bibliographic notes 220(1)
Mathematics for Spectral Methods 221(16)
Some basic concepts 221(3)
The norm of a vector 221(1)
Matrices as linear operators 222(1)
The column space 222(2)
Eigenvalues and eigenvectors 224(3)
Definition of eigenvalues and eigenvectors 224(1)
Diagonalization 225(1)
Orthogonal diagonalization 226(1)
Eigenvalues and the scaling effects of a matrix 227(9)
Matrix norms 227(1)
The Rayleigh quotient 228(2)
The 2 x 2 case 230(2)
The general case 232(2)
The Courant-Fischer minimax theorem 234(2)
Bibliographic notes 236(1)
Spectral Methods 237(40)
Simple harmonic motion 237(14)
Harmonics 237(2)
Mixtures of harmonics 239(2)
An oscillating particle 241(2)
A vibrating string 243(8)
Spectra of matrices and graphs 251(6)
The spectrum of a matrix 252(1)
Relating matrices and graphs 253(3)
The Laplacian matrix and graph spectrum 256(1)
Spectral clustering 257(8)
The second smallest eigenvector of the Laplacian 257(2)
The cut size and the Laplacian 259(1)
Approximating cut size 260(2)
Minimizing cut size 262(1)
Ratiocut 263(2)
Spectral methods for semisupervised learning 265(10)
Harmonics and harmonic functions 265(2)
Eigenvalues and energy 267(1)
The Laplacian and random fields 268(2)
Harmonic functions and the Laplacian 270(2)
Using the Laplacian for regularization 272(2)
Transduction to induction 274(1)
Bibliographic notes 275(2)
Bibliography 277(24)
Index 301


Author: Abney, Steven