E-book: Algebraic Structures in Natural Language [Taylor & Francis e-book]

Edited by Shalom Lappin (University of Gothenburg, Västra Götaland; Queen Mary University of London, UK) and Jean-Philippe Bernardy (University of Gothenburg)
  • Format: 290 pages; 19 tables (black and white); 25 line drawings (colour); 14 line drawings (black and white); 25 illustrations (colour); 14 illustrations (black and white)
  • Publication date: 23-Dec-2022
  • Publisher: CRC Press
  • ISBN-13: 9781003205388
  • Taylor & Francis e-book
  • Price: 106,17 €*
  • * a price that grants access for an unlimited number of concurrent users for an unlimited period
  • Regular price: 151,67 €
  • You save 30%
"Algebraic Structures in Natural Language addresses a central problem in cognitive science: the learning procedures through which humans acquire and represent natural language. Until recently, algebraic systems dominated the study of natural language in formal and computational linguistics, AI, and the psychology of language. Linguistic knowledge was seen as encoded in formal grammars, model theories, proof theories, and other rule-driven devices, and researchers drew conclusions from these systems about how humans acquire and represent language. Recent work on deep learning has produced an increasingly powerful set of general learning mechanisms which do not apply algebraic models of representation (although they can be combined with them), and success in NLP in particular has led some researchers to question the role of algebraic models in the study of human language acquisition and linguistic representation. Psychologists and cognitive scientists have also been exploring explanations of language evolution and language acquisition that rely on probabilistic methods, social interaction, and information theory, rather than on formal models of grammar induction. This work, too, has led some researchers to question the centrality of algebraic approaches to linguistic representation.

This book addresses the learning procedures through which humans acquire natural language, and the way in which they represent its properties. It brings together leading researchers from computational linguistics, psychology, behavioural science, and mathematical linguistics to consider the significance of non-algebraic methods for the study of natural language. The contributions represent a wide spectrum of views, from the claim that algebraic systems are largely irrelevant, to the contrary position that non-algebraic learning methods are engineering devices for efficiently identifying the patterns that underlying grammars and semantic models generate for natural language input. There are interesting and important perspectives that fall at intermediate points between these opposing approaches, and some of them combine elements of both. The book will appeal to researchers and advanced students in each of these fields, as well as to anyone who wants to learn more about the relationship between algorithms and language."
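The contrast drawn in the description can be illustrated with a toy sketch (our own illustration, not taken from the book): an "algebraic" model, here a miniature context-free grammar that generates sentences by rule, and a "non-algebraic" learner, here a bigram model that estimates word-transition probabilities purely from the strings the grammar produces. The grammar, vocabulary, and function names are invented for this example.

```python
import random
from collections import defaultdict

# A toy context-free grammar: an "algebraic" model of a tiny English fragment.
# (Hypothetical example; the grammar and lexicon are invented for illustration.)
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["chases"]],
}

def generate(symbol="S", rng=random):
    """Expand a non-terminal into a list of terminal words by applying rules."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    production = rng.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part, rng)]

def train_bigram(sentences):
    """A 'non-algebraic' learner: estimate P(next word | word) from strings alone."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sentences:
        for a, b in zip(["<s>"] + s, s + ["</s>"]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

# The grammar generates the data; the statistical learner, seeing only strings,
# recovers distributional regularities of the grammar's output: after "the",
# only nouns ever occur, so all probability mass falls on {"dog", "cat"}.
corpus = [generate() for _ in range(500)]
model = train_bigram(corpus)
```

One way to read the book's debate is as a disagreement over the status of `model` here: is it merely an engineering summary of the patterns `GRAMMAR` generates, or can such learned statistics replace the grammar as a model of linguistic knowledge?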

By bringing together leading researchers from computational and mathematical linguistics, psychology, and behavioural science, Algebraic Structures in Natural Language addresses the learning procedures through which humans acquire natural language, and the way in which they represent its properties.

Preface ix
Contributors xi
Introduction xv
Chapter 1 On the Proper Role of Linguistically Oriented Deep Net Analysis in Linguistic Theorising
1(16)
Marco Baroni
Chapter 2 What Artificial Neural Networks Can Tell Us about Human Language Acquisition
17(44)
Alex Warstadt
Samuel R. Bowman
Chapter 3 Grammar through Spontaneous Order
61(16)
Nick Chater
Morten H. Christiansen
Chapter 4 Language is Acquired in Interaction
77(18)
Eve V. Clark
Chapter 5 Why Algebraic Systems aren't Sufficient for Syntax
95(18)
Ben Ambridge
Chapter 6 Learning Syntactic Structures from String Input
113(26)
Ethan Gotlieb Wilcox
Jon Gauthier
Jennifer Hu
Peng Qian
Roger Levy
Chapter 7 Analysing Discourse Knowledge in Pre-Trained LMs
139(24)
Sharid Loaiciga
Chapter 8 Linguistically Guided Multilingual NLP
163(26)
Olga Majewska
Ivan Vulic
Anna Korhonen
Chapter 9 Word Embeddings are Word Story Embeddings (and That's Fine)
189(30)
Katrin Erk
Gabriella Chronis
Chapter 10 Algebra and Language: Reasons for (Dis)content
219(24)
Lawrence S. Moss
Chapter 11 Unitary Recurrent Networks
243(36)
Jean-Philippe Bernardy
Shalom Lappin
Index 279
Shalom Lappin is a Professor of Computational Linguistics at the University of Gothenburg, Professor of Natural Language Processing at Queen Mary University of London, and Emeritus Professor of Computational Linguistics at King's College London. His research focuses on the application of machine learning and probabilistic models to the representation and the acquisition of linguistic knowledge.

Jean-Philippe Bernardy is a researcher at the University of Gothenburg. His main research interest is in interpretable linguistic models, in particular, those built from first principles of algebra, probability and geometry.