
E-book: Neural Machine Translation

(The Johns Hopkins University)
  • Format: PDF+DRM
  • Publication date: 18-Jun-2020
  • Publisher: Cambridge University Press
  • Language: eng
  • ISBN-13: 9781108601764
  • Price: 75,32 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has supplied this e-book in encrypted form, which means that you need to install special software to read it. You will also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Deep learning is revolutionizing how machine translation systems are built today. This book introduces the challenge of machine translation and its evaluation, including historical, linguistic, and applied context, and then develops the core deep learning methods used for natural language applications. Code examples in Python give readers a hands-on blueprint for understanding and implementing their own machine translation systems. The book also provides extensive coverage of machine learning tricks, issues involved in handling various forms of data, model enhancements, and current challenges and methods for analysis and visualization. Summaries of the current research in the field make this a state-of-the-art textbook for undergraduate and graduate classes, as well as an essential reference for researchers and developers interested in other applications of neural methods in the broader field of human language processing.
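To give a sense of what those hands-on examples look like, here is a minimal encoder-decoder sketch in PyTorch, the framework used in the book's hands-on chapters. The model, its layer sizes, and the toy data are illustrative assumptions for this page, not code taken from the book.

    # Minimal sketch of an encoder-decoder translation model (illustrative only).
    import torch
    import torch.nn as nn

    class TinyEncoderDecoder(nn.Module):
        def __init__(self, src_vocab, tgt_vocab, emb_dim=32, hid_dim=64):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, emb_dim)
            self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
            self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, tgt_vocab)

        def forward(self, src, tgt):
            _, h = self.encoder(self.src_emb(src))           # summarize the source sentence
            dec_out, _ = self.decoder(self.tgt_emb(tgt), h)  # condition the decoder on it
            return self.out(dec_out)                         # logits over the target vocabulary

    model = TinyEncoderDecoder(src_vocab=100, tgt_vocab=100)
    src = torch.randint(0, 100, (2, 5))   # 2 toy source "sentences", 5 token ids each
    tgt = torch.randint(0, 100, (2, 4))   # 2 toy target prefixes, 4 token ids each
    print(model(src, tgt).shape)          # torch.Size([2, 4, 100])

The book builds models of this kind step by step, adding alignment/attention and beam-search decoding in later chapters.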

Reviews

'This book can essentially be viewed as an important contribution to the increasingly important area of neural MT, which will be a great help to NLP researchers, scientists, academics, undergraduate or postgraduate students, and MT researchers and users in particular.' Wandri Jooste, Rejwanul Haque, and Andy Way, Machine Translation

Other information

Learn how to build machine translation systems with deep learning from the ground up, from basic concepts to cutting-edge research.
Preface
Reading Guide
Part I Introduction
1 The Translation Problem
1.1 Goals of Translation
1.2 Ambiguity
1.3 The Linguistic View
1.4 The Data View
1.5 Practical Issues
2 Uses of Machine Translation
2.1 Information Access
2.2 Aiding Human Translators
2.3 Communication
2.4 Natural Language Processing Pipelines
2.5 Multimodal Machine Translation
3 History
3.1 Neural Networks
3.2 Machine Translation
4 Evaluation
4.1 Task-Based Evaluation
4.2 Human Assessments
4.3 Automatic Metrics
4.4 Metrics Research
Part II Basics
5 Neural Networks
5.1 Linear Models
5.2 Multiple Layers
5.3 Nonlinearity
5.4 Inference
5.5 Back-Propagation Training
5.6 Exploiting Parallel Processing
5.7 Hands On: Neural Networks in Python
6 Computation Graphs
6.1 Neural Networks as Computation Graphs
6.2 Gradient Computations
6.3 Hands On: Deep Learning Frameworks
7 Neural Language Models
7.1 Feed-Forward Neural Language Models
7.2 Word Embeddings
7.3 Noise Contrastive Estimation
7.4 Recurrent Neural Language Models
7.5 Long Short-Term Memory Models
7.6 Gated Recurrent Units
7.7 Deep Models
7.8 Hands On: Neural Language Models in PyTorch
7.9 Further Readings
8 Neural Translation Models
8.1 Encoder-Decoder Approach
8.2 Adding an Alignment Model
8.3 Training
8.4 Deep Models
8.5 Hands On: Neural Translation Models in PyTorch
8.6 Further Readings
9 Decoding
9.1 Beam Search
9.2 Ensemble Decoding
9.3 Reranking
9.4 Optimizing Decoding
9.5 Directing Decoding
9.6 Hands On: Decoding in Python
9.7 Further Readings
Part III Refinements
10 Machine Learning Tricks
10.1 Failures in Machine Learning
10.2 Ensuring Randomness
10.3 Adjusting the Learning Rate
10.4 Avoiding Local Optima
10.5 Addressing Vanishing and Exploding Gradients
10.6 Sentence-Level Optimization
10.7 Further Readings
11 Alternate Architectures
11.1 Components of Neural Networks
11.2 Attention Models
11.3 Convolutional Machine Translation
11.4 Convolutional Neural Networks with Attention
11.5 Self-Attention: Transformer
11.6 Further Readings
12 Revisiting Words
12.1 Word Embeddings
12.2 Multilingual Word Embeddings
12.3 Large Vocabularies
12.4 Character-Based Models
12.5 Further Readings
13 Adaptation
13.1 Domains
13.2 Mixture Models
13.3 Subsampling
13.4 Fine-Tuning
13.5 Further Readings
14 Beyond Parallel Corpora
14.1 Using Monolingual Data
14.2 Multiple Language Pairs
14.3 Training on Related Tasks
14.4 Further Readings
15 Linguistic Structure
15.1 Guided Alignment Training
15.2 Modeling Coverage
15.3 Adding Linguistic Annotation
15.4 Further Readings
16 Current Challenges
16.1 Domain Mismatch
16.2 Amount of Training Data
16.3 Rare Words
16.4 Noisy Data
16.5 Beam Search
16.6 Word Alignment
16.7 Further Readings
17 Analysis and Visualization
17.1 Error Analysis
17.2 Visualization
17.3 Probing Representations
17.4 Identifying Neurons
17.5 Tracing Decisions Back to Inputs
17.6 Further Readings
Bibliography
Author Index
Index
Philipp Koehn is a leading researcher in the field of machine translation and Professor of Computer Science at Johns Hopkins University. In 2010 he authored the textbook Statistical Machine Translation (Cambridge). He received the Award of Honor from the International Association for Machine Translation and was one of three finalists for the European Inventor Award of the European Patent Office in 2013. Professor Koehn also works actively in industry as Chief Scientist for Omniscien Technologies and as a consultant for Facebook.