
E-book: Knowledge Engineering: Building Cognitive Assistants for Evidence-based Reasoning

Gheorghe Tecuci (George Mason University, Virginia), Dorin Marcu (George Mason University, Virginia), Mihai Boicu (George Mason University, Virginia), David A. Schum (George Mason University, Virginia)
  • Format: PDF+DRM
  • Publication date: 08-Sep-2016
  • Publisher: Cambridge University Press
  • Language: English
  • ISBN-13: 9781316655580
  • Price: 87,67 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on mobile devices (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

This book presents a significant advancement in the theory and practice of knowledge engineering, the discipline concerned with the development of intelligent agents that use knowledge and reasoning to perform problem solving and decision-making tasks. It covers the main stages in the development of a knowledge-based agent: understanding the application domain, modeling problem solving in that domain, developing the ontology, learning the reasoning rules, and testing the agent. The book focuses on a special class of agents: cognitive assistants for evidence-based reasoning that learn complex problem-solving expertise directly from human experts, support experts and nonexperts in problem solving and decision making, and teach their problem-solving expertise to students. A powerful learning agent shell, Disciple-EBR, is included with the book, enabling students, practitioners, and researchers to develop cognitive assistants rapidly in a wide variety of domains that require evidence-based reasoning, including intelligence analysis, cybersecurity, law, forensics, medicine, and education.

Reviews

'At the pole opposite to statistical machine learning lies disciplined knowledge engineering. This book gives a new and comprehensive journey on the approach to AI as symbol manipulation, putting most of the relevant pieces of knowledge engineering together in a refreshingly interesting and novel way.' Edward Feigenbaum, Stanford University, California

'This well-written book is a much-needed update on the process of building expert systems. Gheorghe Tecuci and colleagues have developed the Disciple framework over many years and are using it here as a pedagogical tool for knowledge engineering. Hands-on exercises provide practical instruction to complement the explanations of principles, both of which make this a useful book for the classroom or self-study.' Bruce G. Buchanan, Emeritus Professor of Computer Science, University of Pittsburgh

Other information

Using robust software, this book focuses on learning assistants for evidence-based reasoning that learn complex problem solving from humans.
Preface xv
Acknowledgments xxi
About the Authors xxiii
1 Introduction 1(45)
1.1 Understanding the World through Evidence-based Reasoning 1(4)
1.1.1 What Is Evidence? 1(1)
1.1.2 Evidence, Data, and Information 1(1)
1.1.3 Evidence and Fact 2(1)
1.1.4 Evidence and Knowledge 2(3)
1.1.5 Ubiquity of Evidence 5(1)
1.2 Abductive Reasoning 5(4)
1.2.1 From Aristotle to Peirce 5(1)
1.2.2 Peirce and Sherlock Holmes on Abductive Reasoning 6(3)
1.3 Probabilistic Reasoning 9(16)
1.3.1 Enumerative Probabilities: Obtained by Counting 9(1)
1.3.1.1 Aleatory Probability 9(1)
1.3.1.2 Relative Frequency and Statistics 9(2)
1.3.2 Subjective Bayesian View of Probability 11(2)
1.3.3 Belief Functions 13(3)
1.3.4 Baconian Probability 16(1)
1.3.4.1 Variative and Eliminative Inferences 16(1)
1.3.4.2 Importance of Evidential Completeness 17(3)
1.3.4.3 Baconian Probability of Boolean Expressions 20(1)
1.3.5 Fuzzy Probability 20(1)
1.3.5.1 Fuzzy Force of Evidence 20(1)
1.3.5.2 Fuzzy Probability of Boolean Expressions 21(1)
1.3.5.3 On Verbal Assessments of Probabilities 22(1)
1.3.6 A Summary of Uncertainty Methods and What They Best Capture 23(2)
1.4 Evidence-based Reasoning 25(4)
1.4.1 Deduction, Induction, and Abduction 25(1)
1.4.2 The Search for Knowledge 26(1)
1.4.3 Evidence-based Reasoning Everywhere 27(2)
1.5 Artificial Intelligence 29(4)
1.5.1 Intelligent Agents 30(2)
1.5.2 Mixed-Initiative Reasoning 32(1)
1.6 Knowledge Engineering 33(8)
1.6.1 From Expert Systems to Knowledge-based Agents and Cognitive Assistants 33(2)
1.6.2 An Ontology of Problem-Solving Tasks 35(1)
1.6.2.1 Analytic Tasks 36(1)
1.6.2.2 Synthetic Tasks 36(1)
1.6.3 Building Knowledge-based Agents 37(1)
1.6.3.1 How Knowledge-based Agents Are Built and Why It Is Hard 37(2)
1.6.3.2 Teaching as an Alternative to Programming: Disciple Agents 39(1)
1.6.3.3 Disciple-EBR, Disciple-CD, and TIACRITIS 40(1)
1.7 Obtaining Disciple-EBR 41(1)
1.8 Review Questions 42(4)
2 Evidence-based Reasoning: Connecting the Dots 46(37)
2.1 How Easy Is It to Connect the Dots? 46(10)
2.1.1 How Many Kinds of Dots Are There? 47(1)
2.1.2 Which Evidential Dots Can Be Believed? 48(2)
2.1.3 Which Evidential Dots Should Be Considered? 50(1)
2.1.4 Which Evidential Dots Should We Try to Connect? 50(2)
2.1.5 How to Connect Evidential Dots to Hypotheses? 52(2)
2.1.6 What Do Our Dot Connections Mean? 54(2)
2.2 Sample Evidence-based Reasoning Task: Intelligence Analysis 56(8)
2.2.1 Evidence in Search of Hypotheses 56(2)
2.2.2 Hypotheses in Search of Evidence 58(2)
2.2.3 Evidentiary Testing of Hypotheses 60(2)
2.2.4 Completing the Analysis 62(2)
2.3 Other Evidence-based Reasoning Tasks 64(12)
2.3.1 Cyber Insider Threat Discovery and Analysis 64(4)
2.3.2 Analysis of Wide-Area Motion Imagery 68(2)
2.3.3 Inquiry-based Teaching and Learning in a Science Classroom 70(1)
2.3.3.1 Need for Inquiry-based Teaching and Learning 70(1)
2.3.3.2 Illustration of Inquiry-based Teaching and Learning 71(3)
2.3.3.3 Other Examples of Inquiry-based Teaching and Learning 74(2)
2.4 Hands On: Browsing an Argumentation 76(5)
2.5 Project Assignment 1 81(1)
2.6 Review Questions 81(2)
3 Methodologies and Tools for Agent Design and Development 83(30)
3.1 A Conventional Design and Development Scenario 83(5)
3.1.1 Conventional Design and Development Phases 83(1)
3.1.2 Requirements Specification and Domain Understanding 83(2)
3.1.3 Ontology Design and Development 85(1)
3.1.4 Development of the Problem-Solving Rules or Methods 86(1)
3.1.5 Verification, Validation, and Certification 87(1)
3.2 Development Tools and Reusable Ontologies 88(5)
3.2.1 Expert System Shells 88(1)
3.2.2 Foundational and Utility Ontologies and Their Reuse 89(1)
3.2.3 Learning Agent Shells 90(1)
3.2.4 Learning Agent Shell for Evidence-based Reasoning 91(2)
3.3 Agent Design and Development Using Learning Technology 93(14)
3.3.1 Requirements Specification and Domain Understanding 93(1)
3.3.2 Rapid Prototyping 93(7)
3.3.3 Ontology Design and Development 100(1)
3.3.4 Rule Learning and Ontology Refinement 101(3)
3.3.5 Hierarchical Organization of the Knowledge Repository 104(1)
3.3.6 Learning-based Design and Development Phases 105(2)
3.4 Hands On: Loading, Saving, and Closing Knowledge Bases 107(4)
3.5 Knowledge Base Guidelines 111(1)
3.6 Project Assignment 2 111(1)
3.7 Review Questions 112(1)
4 Modeling the Problem-Solving Process 113(42)
4.1 Problem Solving through Analysis and Synthesis 113(1)
4.2 Inquiry-driven Analysis and Synthesis 113(6)
4.3 Inquiry-driven Analysis and Synthesis for Evidence-based Reasoning 119(3)
4.3.1 Hypothesis Reduction and Assessment Synthesis 119(1)
4.3.2 Necessary and Sufficient Conditions 120(1)
4.3.3 Sufficient Conditions and Scenarios 120(1)
4.3.4 Indicators 121(1)
4.4 Evidence-based Assessment 122(2)
4.5 Hands On: Was the Cesium Stolen? 124(6)
4.6 Hands On: Hypothesis Analysis and Evidence Search and Representation 130(3)
4.7 Believability Assessment 133(7)
4.7.1 Tangible Evidence 133(2)
4.7.2 Testimonial Evidence 135(2)
4.7.3 Missing Evidence 137(1)
4.7.4 Authoritative Record 137(1)
4.7.5 Mixed Evidence and Chains of Custody 138(2)
4.8 Hands On: Believability Analysis 140(3)
4.9 Drill-Down Analysis, Assumption-based Reasoning, and What-If Scenarios 143(1)
4.10 Hands On: Modeling, Formalization, and Pattern Learning 144(2)
4.11 Hands On: Analysis Based on Learned Patterns 146(1)
4.12 Modeling Guidelines 147(4)
4.13 Project Assignment 3 151(1)
4.14 Review Questions 152(3)
5 Ontologies 155(19)
5.1 What Is an Ontology? 155(1)
5.2 Concepts and Instances 156(1)
5.3 Generalization Hierarchies 157(1)
5.4 Object Features 158(1)
5.5 Defining Features 158(2)
5.6 Representation of N-ary Features 160(1)
5.7 Transitivity 161(1)
5.8 Inheritance 162(1)
5.8.1 Default Inheritance 162(1)
5.8.2 Multiple Inheritance 162(1)
5.9 Concepts as Feature Values 163(1)
5.10 Ontology Matching 164(1)
5.11 Hands On: Browsing an Ontology 165(3)
5.12 Project Assignment 4 168(1)
5.13 Review Questions 168(6)
6 Ontology Design and Development 174(28)
6.1 Design and Development Methodology 174(1)
6.2 Steps in Ontology Development 174(2)
6.3 Domain Understanding and Concept Elicitation 176(3)
6.3.1 Tutorial Session Delivered by the Expert 177(1)
6.3.2 Ad-hoc List Created by the Expert 177(1)
6.3.3 Book Index 177(1)
6.3.4 Unstructured Interviews with the Expert 177(1)
6.3.5 Structured Interviews with the Expert 177(1)
6.3.6 Protocol Analysis (Think-Aloud Technique) 178(1)
6.3.7 The Card-Sort Method 179(1)
6.4 Modeling-based Ontology Specification 179(1)
6.5 Hands On: Developing a Hierarchy of Concepts and Instances 180(6)
6.6 Guidelines for Developing Generalization Hierarchies 186(3)
6.6.1 Well-Structured Hierarchies 186(1)
6.6.2 Instance or Concept? 187(1)
6.6.3 Specific Instance or Generic Instance? 188(1)
6.6.4 Naming Conventions 188(1)
6.6.5 Automatic Support 189(1)
6.7 Hands On: Developing a Hierarchy of Features 189(3)
6.8 Hands On: Defining Instances and Their Features 192(3)
6.9 Guidelines for Defining Features and Values 195(2)
6.9.1 Concept or Feature? 195(1)
6.9.2 Concept, Instance, or Constant? 196(1)
6.9.3 Naming of Features 196(1)
6.9.4 Automatic Support 197(1)
6.10 Ontology Maintenance 197(1)
6.11 Project Assignment 5 198(1)
6.12 Review Questions 198(4)
7 Reasoning with Ontologies and Rules 202(20)
7.1 Production System Architecture 202(1)
7.2 Complex Ontology-based Concepts 203(1)
7.3 Reduction and Synthesis Rules and the Inference Engine 204(2)
7.4 Reduction and Synthesis Rules for Evidence-based Hypotheses Analysis 206(1)
7.5 Rule and Ontology Matching 207(5)
7.6 Partially Learned Knowledge 212(3)
7.6.1 Partially Learned Concepts 212(1)
7.6.2 Partially Learned Features 213(1)
7.6.3 Partially Learned Hypotheses 214(1)
7.6.4 Partially Learned Rules 214(1)
7.7 Reasoning with Partially Learned Knowledge 215(1)
7.8 Review Questions 216(6)
8 Learning for Knowledge-based Agents 222(30)
8.1 Introduction to Machine Learning 222(5)
8.1.1 What Is Learning? 222(1)
8.1.2 Inductive Learning from Examples 223(1)
8.1.3 Explanation-based Learning 224(1)
8.1.4 Learning by Analogy 225(1)
8.1.5 Multistrategy Learning 226(1)
8.2 Concepts 227(2)
8.2.1 Concepts, Examples, and Exceptions 227(1)
8.2.2 Examples and Exceptions of a Partially Learned Concept 228(1)
8.3 Generalization and Specialization Rules 229(5)
8.3.1 Turning Constants into Variables 230(1)
8.3.2 Turning Occurrences of a Variable into Different Variables 230(1)
8.3.3 Climbing the Generalization Hierarchies 231(1)
8.3.4 Dropping Conditions 231(1)
8.3.5 Extending Intervals 231(1)
8.3.6 Extending Ordered Sets of Intervals 232(1)
8.3.7 Extending Symbolic Probabilities 232(1)
8.3.8 Extending Discrete Sets 232(1)
8.3.9 Using Feature Definitions 233(1)
8.3.10 Using Inference Rules 233(1)
8.4 Types of Generalizations and Specializations 234(4)
8.4.1 Definition of Generalization 234(1)
8.4.2 Minimal Generalization 234(1)
8.4.3 Minimal Specialization 235(1)
8.4.4 Generalization of Two Concepts 236(1)
8.4.5 Minimal Generalization of Two Concepts 236(1)
8.4.6 Specialization of Two Concepts 237(1)
8.4.7 Minimal Specialization of Two Concepts 237(1)
8.5 Inductive Concept Learning from Examples 238(4)
8.6 Learning with an Incomplete Representation Language 242(1)
8.7 Formal Definition of Generalization 243(4)
8.7.1 Formal Representation Language for Concepts 243(2)
8.7.2 Term Generalization 245(1)
8.7.3 Clause Generalization 245(1)
8.7.4 BRU Generalization 246(1)
8.7.5 Generalization of Concepts with Negations 247(1)
8.7.6 Substitutions and the Generalization Rules 247(1)
8.8 Review Questions 247(5)
9 Rule Learning 252(42)
9.1 Modeling, Learning, and Problem Solving 252(1)
9.2 An Illustration of Rule Learning and Refinement 253(4)
9.3 The Rule-Learning Problem 257(1)
9.4 Overview of the Rule-Learning Method 258(2)
9.5 Mixed-Initiative Example Understanding 260(4)
9.5.1 What Is an Explanation of an Example? 260(2)
9.5.2 Explanation Generation 262(2)
9.6 Example Reformulation 264(1)
9.7 Analogy-based Generalization 265(5)
9.7.1 Analogical Problem Solving Based on Explanation Similarity 265(1)
9.7.2 Upper Bound Condition as a Maximally General Analogy Criterion 266(2)
9.7.3 Lower Bound Condition as a Minimally General Analogy Criterion 268(2)
9.8 Rule Generation and Analysis 270(1)
9.9 Generalized Examples 270(1)
9.10 Hypothesis Learning 271(4)
9.11 Hands On: Rule and Hypotheses Learning 275(4)
9.12 Explanation Generation Operations 279(6)
9.12.1 Guiding Explanation Generation 279(1)
9.12.2 Fixing Values 280(1)
9.12.3 Explanations with Functions 280(3)
9.12.4 Explanations with Comparisons 283(2)
9.12.5 Hands On: Explanations with Functions and Comparisons 285(1)
9.13 Guidelines for Rule and Hypothesis Learning 285(4)
9.14 Project Assignment 6 289(1)
9.15 Review Questions 289(5)
10 Rule Refinement 294(35)
10.1 Incremental Rule Refinement 294(15)
10.1.1 The Rule Refinement Problem 294(1)
10.1.2 Overview of the Rule Refinement Method 295(1)
10.1.3 Rule Refinement with Positive Examples 296(1)
10.1.3.1 Illustration of Rule Refinement with a Positive Example 296(2)
10.1.3.2 The Method of Rule Refinement with a Positive Example 298(2)
10.1.3.3 Summary of Rule Refinement with a Positive Example 300(1)
10.1.4 Rule Refinement with Negative Examples 300(1)
10.1.4.1 Illustration of Rule Refinement with Except-When Conditions 300(5)
10.1.4.2 The Method of Rule Refinement with Except-When Conditions 305(1)
10.1.4.3 Illustration of Rule Refinement through Condition Specialization 305(2)
10.1.4.4 The Method of Rule Refinement through Condition Specialization 307(1)
10.1.4.5 Summary of Rule Refinement with a Negative Example 308(1)
10.2 Learning with an Evolving Ontology 309(7)
10.2.1 The Rule Regeneration Problem 309(1)
10.2.2 On-Demand Rule Regeneration 310(2)
10.2.3 Illustration of the Rule Regeneration Method 312(4)
10.2.4 The Rule Regeneration Method 316(1)
10.3 Hypothesis Refinement 316(1)
10.4 Characterization of Rule Learning and Refinement 317(2)
10.5 Hands On: Rule Refinement 319(2)
10.6 Guidelines for Rule Refinement 321(1)
10.7 Project Assignment 7 322(1)
10.8 Review Questions 322(7)
11 Abstraction of Reasoning 329(9)
11.1 Statement Abstraction 329(2)
11.2 Reasoning Tree Abstraction 331(1)
11.3 Reasoning Tree Browsing 331(1)
11.4 Hands On: Abstraction of Reasoning 331(3)
11.5 Abstraction Guideline 334(1)
11.6 Project Assignment 8 335(1)
11.7 Review Questions 335(3)
12 Disciple Agents 338(88)
12.1 Introduction 338(1)
12.2 Disciple-WA: Military Engineering Planning 338(10)
12.2.1 The Workaround Planning Problem 338(3)
12.2.2 Modeling the Workaround Planning Process 341(2)
12.2.3 Ontology Design and Development 343(2)
12.2.4 Rule Learning 345(1)
12.2.5 Experimental Results 346(2)
12.3 Disciple-COA: Course of Action Critiquing 348(16)
12.3.1 The Course of Action Critiquing Problem 348(3)
12.3.2 Modeling the COA Critiquing Process 351(1)
12.3.3 Ontology Design and Development 352(3)
12.3.4 Training the Disciple-COA Agent 355(5)
12.3.5 Experimental Results 360(4)
12.4 Disciple-COG: Center of Gravity Analysis 364(23)
12.4.1 The Center of Gravity Analysis Problem 364(3)
12.4.2 Overview of the Use of Disciple-COG 367(9)
12.4.3 Ontology Design and Development 376(1)
12.4.4 Script Development for Scenario Elicitation 376(4)
12.4.5 Agent Teaching and Learning 380(3)
12.4.6 Experimental Results 383(4)
12.5 Disciple-VPT: Multi-Agent Collaborative Planning 387(39)
12.5.1 Introduction 387(1)
12.5.2 The Architecture of Disciple-VPT 388(1)
12.5.3 The Emergency Response Planning Problem 389(1)
12.5.4 The Disciple-VE Learning Agent Shell 390(4)
12.5.5 Hierarchical Task Network Planning 394(2)
12.5.6 Guidelines for HTN Planning 396(4)
12.5.7 Integration of Planning and Inference 400(3)
12.5.8 Teaching Disciple-VE to Perform Inference Tasks 403(6)
12.5.9 Teaching Disciple-VE to Perform Planning Tasks 409(1)
12.5.9.1 Why Learning Planning Rules Is Difficult 409(1)
12.5.9.2 Learning a Set of Correlated Planning Rules 409(4)
12.5.9.3 The Learning Problem and Method for a Set of Correlated Planning Rules 413(1)
12.5.9.4 Learning Correlated Planning Task Reduction Rules 413(1)
12.5.9.5 Learning Correlated Planning Task Concretion Rules 414(1)
12.5.9.6 Learning a Correlated Action Concretion Rule 415(1)
12.5.10 The Virtual Experts Library 416(4)
12.5.11 Multidomain Collaborative Planning 420(1)
12.5.12 Basic Virtual Planning Experts 421(1)
12.5.13 Evaluation of Disciple-VPT 422(1)
12.5.14 Final Remarks 422(4)
13 Design Principles for Cognitive Assistants 426(17)
13.1 Learning-based Knowledge Engineering 426(1)
13.2 Problem-Solving Paradigm for User-Agent Collaboration 427(1)
13.3 Multi-Agent and Multidomain Problem Solving 427(1)
13.4 Knowledge Base Structuring for Knowledge Reuse 427(1)
13.5 Integrated Teaching and Learning 428(1)
13.6 Multistrategy Learning 428(1)
13.7 Knowledge Adaptation 429(1)
13.8 Mixed-Initiative Modeling, Learning, and Problem Solving 429(1)
13.9 Plausible Reasoning with Partially Learned Knowledge 430(1)
13.10 User Tutoring in Problem Solving 430(1)
13.11 Agent Architecture for Rapid Agent Development 430(1)
13.12 Design Based on a Complete Agent Life Cycle 431(12)
References 433(10)
Appendixes 443(1)
Summary: Knowledge Engineering Guidelines 443(1)
Summary: Operations with Disciple-EBR 444(2)
Summary: Hands-On Exercises 446(1)
Index 447
Gheorghe Tecuci (PhD, University of Paris-South and Polytechnic Institute of Bucharest) is Professor of Computer Science and Director of the Learning Agents Center at George Mason University, Virginia, Member of the Romanian Academy, and former Chair of Artificial Intelligence at the US Army War College. He has published 11 books and more than 190 papers.

Dorin Marcu (PhD, George Mason University) is Research Assistant Professor in the Learning Agents Center at George Mason University, Virginia. He collaborated in the development of the Disciple Learning Agent Shell and a series of cognitive assistants based on it for different application domains, such as Disciple-COA (course of action critiquing), Disciple-COG (strategic center of gravity analysis), Disciple-LTA (learning, tutoring, and assistant), and Disciple-EBR (evidence-based reasoning).

Mihai Boicu (PhD, George Mason University) is Associate Professor of Information Sciences and Technology and Associate Director of the Learning Agents Center at George Mason University, Virginia. He is the main software architect of the Disciple agent development platform and coordinated the software development of Disciple-EBR. He has received the IAAI Innovative Application Award.

David A. Schum (PhD, Ohio State University) is Emeritus Professor of Systems Engineering, Operations Research, and Law, as well as Chief Scientist of the Learning Agents Center at George Mason University, Virginia. He has published more than 100 research papers and 6 books on evidence and probabilistic inference, and is recognized as one of the founding fathers of the emerging Science of Evidence.