
Deep Learning and Linguistic Representation [Paperback]

Shalom Lappin (Queen Mary University of London, UK)
  • Format: Paperback / softback, 168 pages, height x width: 234x156 mm, weight: 240 g, 23 tables, black and white; 55 illustrations, black and white
  • Series: Chapman & Hall/CRC Machine Learning & Pattern Recognition
  • Publication date: 27-Apr-2021
  • Publisher: Chapman & Hall/CRC
  • ISBN-10: 0367648741
  • ISBN-13: 9780367648749

The application of deep learning methods to problems in natural language processing has generated significant progress across a wide range of natural language processing tasks. For some of these applications, deep learning models now approach or surpass human performance. While the success of this approach has transformed the engineering methods of machine learning in artificial intelligence, the significance of these achievements for the modelling of human learning and representation remains unclear.

Deep Learning and Linguistic Representation looks at the application of a variety of deep learning systems to several cognitively interesting NLP tasks. It also considers the extent to which this work illuminates our understanding of the way in which humans acquire and represent linguistic knowledge.

Key Features:

  • combines an introduction to deep learning in AI and NLP with current research on Deep Neural Networks in computational linguistics.
  • is self-contained and suitable for teaching in computer science, AI, and cognitive science courses; it does not assume extensive technical training in these areas.
  • provides a compact guide to work on state-of-the-art systems that are producing a revolution across a range of difficult natural language tasks.

Reviews

This book is a very timely synthesis of classical linguistics that the author has worked in for several decades and the modern revolution in NLP enabled by Deep Learning. It also asks provocative foundational questions about whether traditional grammars are the most suitable representations of linguistic structure or if we need to go beyond them.

-- Devdatt Dubhashi, Professor, Chalmers University

Deep neural networks are having a tremendous impact on applied natural language processing. In this clearly written book, Shalom Lappin tackles the novel and exciting question of what their implications are for theories of language acquisition, representation and usage. This will be enlightening reading for anybody interested in language from the perspectives of theoretical linguistics, cognitive science, AI and the philosophy of science.

-- Marco Baroni, ICREA Research Professor, Facebook AI Research Scientist

Contents

Preface xi
Chapter 1 Introduction: Deep Learning in Natural Language Processing 1
1.1 Outline of the Book 1
1.2 From Engineering to Cognitive Science 4
1.3 Elements of Deep Learning 7
1.4 Types of Deep Neural Networks 10
1.5 An Example Application 17
1.6 Summary and Conclusions 21
Chapter 2 Learning Syntactic Structure with Deep Neural Networks 23
2.1 Subject-Verb Agreement 23
2.2 Architecture and Experiments 24
2.3 Hierarchical Structure 34
2.4 Tree DNNs 39
2.5 Summary and Conclusions 42
Chapter 3 Machine Learning and the Sentence Acceptability Task 45
3.1 Gradience in Sentence Acceptability 45
3.2 Predicting Acceptability with Machine Learning Models 51
3.3 Adding Tags and Trees 62
3.4 Summary and Conclusions 66
Chapter 4 Predicting Human Acceptability Judgements in Context 69
4.1 Acceptability Judgements in Context 69
4.2 Two Sets of Experiments 75
4.3 The Compression Effect and Discourse Coherence 78
4.4 Predicting Acceptability with Different DNN Models 80
4.5 Summary and Conclusions 87
Chapter 5 Cognitively Viable Computational Models of Linguistic Knowledge 89
5.1 How Useful Are Linguistic Theories for NLP Applications? 89
5.2 Machine Learning Models vs Formal Grammar 92
5.3 Explaining Language Acquisition 96
5.4 Deep Learning and Distributional Semantics 100
5.5 Summary and Conclusions 108
Chapter 6 Conclusions and Future Work 113
6.1 Representing Syntactic and Semantic Knowledge 113
6.2 Domain-Specific Learning Biases and Language Acquisition 119
6.3 Directions for Future Work 121
References 123
Author Index 139
Subject Index 145
Shalom Lappin is Professor of Natural Language Processing at Queen Mary University of London, Professor of Computational Linguistics at the University of Gothenburg, and Emeritus Professor of Computational Linguistics at King's College London.