E-book: Computational Trust Models and Machine Learning

Edited by Anwitaman Datta (Nanyang Technological University, Singapore), Ee-Peng Lim (Singapore Management University), and Xin Liu (École Polytechnique Fédérale de Lausanne, Switzerland)
  • Format: EPUB+DRM
  • Price: 55,89 €*
  • * the price is final, i.e., no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on mobile devices (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

"This book provides an introduction to computational trust models from a machine learning perspective. After reviewing traditional computational trust models, it discusses a new trend of applying formerly unused machine learning methodologies, such as supervised learning. The application of various learning algorithms, such as linear regression, matrix decomposition, and decision trees, illustrates how to translate the trust modeling problem into a (supervised) learning problem. The book also shows how novel machine learning techniques can improve the accuracy of trust assessment compared to traditional approaches"--

Trust has always been necessary for commerce to flow, and now that so much commerce is being conducted electronically, physical senses and personal interactions that were the basis of trust in the past are no longer available. Information scientists describe how trust can be adapted to electronic commerce in a quantitative manner by pursuing a data-driven methodology. They cover trust in online communities, judging the veracity of claims and reliability of sources, Web credibility assessment, trust-aware recommender systems, and biases in trust-based systems. Annotation ©2015 Ringgold, Inc., Portland, OR (protoview.com)

Computational Trust Models and Machine Learning provides a detailed introduction to the concept of trust and its application in various computer science areas, including multi-agent systems, online social networks, and communication systems. Identifying trust modeling challenges that cannot be addressed by traditional approaches, this book:

  • Explains how reputation-based systems are used to determine trust in diverse online communities
  • Describes how machine learning techniques are employed to build robust reputation systems
  • Explores two distinctive approaches to determining credibility of resources—one where the human role is implicit, and one that leverages human input explicitly
  • Shows how decision support can be facilitated by computational trust models
  • Discusses collaborative filtering-based, trust-aware recommendation systems (a minimal weighting sketch follows this overview)
  • Defines a framework for translating a trust modeling problem into a learning problem
  • Investigates the objectivity of human feedback, emphasizing the need to filter out outlying opinions

Computational Trust Models and Machine Learning effectively demonstrates how novel machine learning techniques can improve the accuracy of trust assessment.
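To make the trust-aware collaborative filtering point above slightly more tangible, the sketch below (not from the book; the users, ratings, and trust values are invented) replaces rating-similarity weights with weights read off an explicit trust network, one common flavour of trust-aware memory-based collaborative filtering.

    # Minimal sketch of trust-aware weighting in memory-based collaborative filtering.
    # Assumption: trust[u][v] in [0, 1] comes from an explicit trust network; the
    # prediction for (user, item) is the trust-weighted average of neighbours'
    # mean-centred ratings, added back onto the user's own mean rating.

    ratings = {                       # hypothetical user -> {item: rating}
        "alice": {"book_a": 5.0, "book_b": 3.0},
        "bob":   {"book_a": 4.0, "book_c": 2.0},
        "carol": {"book_b": 4.5, "book_c": 4.0},
    }
    trust = {                         # hypothetical explicit trust statements
        "alice": {"bob": 0.9, "carol": 0.4},
    }

    def mean_rating(user):
        vals = ratings[user].values()
        return sum(vals) / len(vals)

    def predict(user, item):
        num = den = 0.0
        for neighbour, t in trust.get(user, {}).items():
            if item in ratings.get(neighbour, {}):
                num += t * (ratings[neighbour][item] - mean_rating(neighbour))
                den += t
        if den == 0.0:
            return mean_rating(user)  # fall back when no trusted neighbour rated the item
        return mean_rating(user) + num / den

    print(predict("alice", "book_c"))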

List of Figures xiii
List of Tables xv
Preface xvii
About the Editors xxi
Contributors xxiii
1 Introduction 1(18)
1.1 Overview 1(1)
1.2 What Is Trust? 2(2)
1.3 Computational Trust 4(13)
1.3.1 Computational Trust Modeling: A Review 4(2)
1.3.1.1 Summation and Average 6(1)
1.3.1.2 Bayesian Inference 7(1)
1.3.1.3 Web of Trust 8(2)
1.3.1.4 Iterative Methods 10(1)
1.3.2 Machine Learning for Trust Modeling 11(1)
1.3.2.1 A Little Bit about Machine Learning 11(1)
1.3.2.2 Machine Learning for Trust 12(5)
1.4 Structure of the Book 17(2)
2 Trust in Online Communities 19(20)
2.1 Introduction 19(1)
2.2 Trust in E-Commerce Environments 20(5)
2.3 Trust in Search Engines 25(2)
2.4 Trust in P2P Information Sharing Networks 27(4)
2.5 Trust in Service-Oriented Environments 31(2)
2.6 Trust in Social Networks 33(3)
2.7 Discussion 36(3)
3 Judging the Veracity of Claims and Reliability of Sources 39(34)
3.1 Introduction 40(3)
3.2 Related Work 43(3)
3.2.1 Foundations of Trust 43(1)
3.2.2 Consistency in Information Extraction 44(1)
3.2.2.1 Local Consistency 44(1)
3.2.2.2 Global Consistency 44(1)
3.2.3 Source Dependence 45(1)
3.2.3.1 Comparison to Credibility Analysis 45(1)
3.2.4 Comparison to Other Trust Mechanisms 46(1)
3.3 Fact-Finding 46(4)
3.3.1 Priors 47(1)
3.3.2 Fact-Finding Algorithms 48(1)
3.3.2.1 Sums (Hubs and Authorities) 48(1)
3.3.2.2 Average-Log 48(1)
3.3.2.3 Investment 48(1)
3.3.2.4 PooledInvestment 49(1)
3.3.2.5 TruthFinder 49(1)
3.3.2.6 3-Estimates 49(1)
3.4 Generalized Constrained Fact-Finding 50(1)
3.5 Generalized Fact-Finding 50(8)
3.5.1 Rewriting Fact-Finders for Assertion Weights 51(1)
3.5.1.1 Generalized Sums (Hubs and Authorities) 51(1)
3.5.1.2 Generalized Average-Log 51(1)
3.5.1.3 Generalized Investment 52(1)
3.5.1.4 Generalized PooledInvestment 52(1)
3.5.1.5 Generalized TruthFinder 52(1)
3.5.1.6 Generalized 3-Estimates 52(1)
3.5.2 Encoding Information in Weighted Assertions 53(1)
3.5.2.1 Uncertainty in Information Extraction 53(1)
3.5.2.2 Uncertainty of the Source 53(1)
3.5.2.3 Similarity between Claims 54(1)
3.5.2.4 Group Membership via Weighted Assertions 54(1)
3.5.3 Encoding Groups and Attributes as Layers of Graph Nodes 55(1)
3.5.3.1 Source Domain Expertise 56(2)
3.5.3.2 Additional Layers versus Weighted Edges 58(1)
3.6 Constrained Fact-Finding 58(4)
3.6.1 Propositional Linear Programming 58(1)
3.6.2 Cost Function 59(1)
3.6.3 Values → Votes → Belief 60(1)
3.6.4 LP Decomposition 60(1)
3.6.5 Tie Breaking 61(1)
3.6.6 "Unknown" Augmentation 61(1)
3.7 Experimental Results 62(8)
3.7.1 Data 62(1)
3.7.1.1 Population 62(1)
3.7.1.2 Books 63(1)
3.7.1.3 Biography 63(1)
3.7.1.4 American vs. British Spelling 63(1)
3.7.2 Experimental Setup 63(1)
3.7.3 Generalized Fact-Finding 64(1)
3.7.3.1 Tuned Assertion Certainty 64(1)
3.7.3.2 Uncertainty in Information Extraction 65(1)
3.7.3.3 Groups as Weighted Assertions 65(1)
3.7.3.4 Groups as Additional Layers 66(1)
3.7.4 Constrained Fact-Finding 67(1)
3.7.4.1 IBT vs. L+I 67(1)
3.7.4.2 City Population 67(1)
3.7.4.3 Synthetic City Population 68(1)
3.7.4.4 Basic Biographies 69(1)
3.7.4.5 American vs. British Spelling 69(1)
3.7.5 The Joint Generalized Constrained Fact-Finding Framework 70(1)
3.8 Conclusion 70(3)
4 Web Credibility Assessment 73(50)
4.1 Introduction 74(1)
4.2 Web Credibility Overview 75(6)
4.2.1 What Is Web Credibility? 75(1)
4.2.2 Introduction to Research on Credibility 76(1)
4.2.3 Current Research 77(2)
4.2.4 Definitions Used in This Chapter 79(1)
4.2.4.1 Information Credibility 79(1)
4.2.4.2 Information Controversy 79(1)
4.2.4.3 Credibility Support for Various Types of Information 80(1)
4.3 Data Collection 81(9)
4.3.1 Collection Means 81(1)
4.3.1.1 Existing Datasets 81(1)
4.3.1.2 Data from Tools Supporting Credibility Evaluation 82(1)
4.3.1.3 Data from Labelers 82(1)
4.3.2 Supporting Web Credibility Evaluation 83(1)
4.3.2.1 Support User's Expertise 84(1)
4.3.2.2 Crowdsourcing Systems 84(1)
4.3.2.3 Databases, Search Engines, Antiviruses and Lists of Pre-Scanned Sites 85(1)
4.3.2.4 Certification, Signatures and Seals 85(1)
4.3.3 Reconcile -- A Case Study 86(4)
4.4 Analysis of Content Credibility Evaluations 90(12)
4.4.1 Subjectivity 90(3)
4.4.2 Consensus and Controversy 93(4)
4.4.3 Cognitive Bias 97(1)
4.4.3.1 Omnipresent Negative Skew -- Shift Towards Positive 97(2)
4.4.3.2 Users Characteristics Affecting Credibility Evaluation -- Selected Personality Traits 99(1)
4.4.3.3 Users Characteristics Affecting Credibility Evaluation -- Cognitive Heuristics 100(2)
4.5 Aggregation Methods -- What Is The Overall Credibility? 102(7)
4.5.1 How to Measure Credibility 102(1)
4.5.2 Standard Aggregates 103(4)
4.5.3 Combating Bias -- Whose Vote Should Count More? 107(2)
4.6 Classifying Credibility Evaluations Using External Web Content Features 109(14)
4.6.1 How We Get Values of Outcome Variable 109(1)
4.6.2 Motivation for Building a Feature-Based Classifier of Webpages Credibility 110(1)
4.6.3 Classification of Web Pages Credibility -- Related Work 110(1)
4.6.4 Dealing with Controversy Problem 111(1)
4.6.5 Aggregation of Evaluations 112(1)
4.6.6 Features 113(2)
4.6.7 Results of Experiments with Building of Classifier Determining whether a Webpage Is Highly Credible (HC), Neutral (N) or Highly Not Credible (HNC) 115(3)
4.6.8 Results of Experiments with Build of Binary Classifier Determining whether Webpage Is Credible or Not 118(2)
4.6.9 Results of Experiments with Build of Binary Classifier of Controversy 120(1)
4.6.10 Summary and Improvement Suggestions 120(3)
5 Trust-Aware Recommender Systems 123(34)
5.1 Recommender Systems 124(11)
5.1.1 Content-Based Recommendation 126(1)
5.1.2 Collaborative Filtering (CF) 127(1)
5.1.2.1 Memory-Based Collaborative Filtering 128(2)
5.1.2.2 Model-Based Collaborative Filtering 130(1)
5.1.3 Hybrid Recommendation 130(1)
5.1.4 Evaluating Recommender Systems 131(2)
5.1.5 Challenges of Recommender Systems 133(1)
5.1.5.1 Cold Start 133(1)
5.1.5.2 Data Sparsity 133(1)
5.1.5.3 Attacks 134(1)
5.1.6 Summary 134(1)
5.2 Computational Models of Trust in Recommender Systems 135(10)
5.2.1 Definition and Properties 135(1)
5.2.1.1 Notations 135(1)
5.2.1.2 Trust Networks 136(1)
5.2.1.3 Properties of Trust 136(2)
5.2.2 Global and Local Trust Metrics 138(1)
5.2.3 Inferring Trust Values 139(1)
5.2.3.1 Inferring Trust in Binary Trust Networks 140(1)
5.2.3.2 Inferring Trust in Continuous Trust Networks 141(3)
5.2.3.3 Inferring Implicit Trust Values 144(1)
5.2.3.4 Trust Aggregation 145(1)
5.2.4 Summary 145(1)
5.3 Incorporating Trust in Recommender Systems 145(10)
5.3.1 Trust-Aware Memory-Based CF Systems 148(1)
5.3.1.1 Trust-Aware Filtering 148(1)
5.3.1.2 Trust-Aware Weighting 149(2)
5.3.2 Trust-Aware Model-Based CF Systems 151(2)
5.3.3 Recommendation Using Distrust Information 153(1)
5.3.4 Advantages of Trust-Aware Recommendation 154(1)
5.3.5 Research Directions of Trust-Aware Recommendation 154(1)
5.4 Conclusion 155(2)
6 Biases in Trust-Based Systems 157(18)
6.1 Introduction 157(1)
6.2 Types of Biases 158(3)
6.2.1 Cognitive Bias 158(2)
6.2.2 Spam 160(1)
6.3 Detection of Biases 161(7)
6.3.1 Unsupervised Approaches 161(4)
6.3.2 Supervised Approaches 165(3)
6.4 Lessening the Impact of Biases 168(4)
6.4.1 Avoidance 168(1)
6.4.2 Aggregation 168(1)
6.4.3 Compensation 169(3)
6.4.4 Elimination 172(1)
6.5 Summary 172(3)
Bibliography 175(28)
Index 203
Xin Liu is currently a postdoctoral researcher in the Laboratoire de Systèmes d'Informations Répartis, led by Professor Karl Aberer, at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. Before joining EPFL, Xin received his Ph.D. in computer science from Nanyang Technological University in Singapore, supervised by Associate Professor Anwitaman Datta. His current research interests include recommender systems, trust and reputation systems, social computing, and distributed computing. His papers have been accepted at several prestigious academic events, and he has been a program committee member and reviewer for numerous international conferences and journals.

Anwitaman Datta is an associate professor at Nanyang Technological University, Singapore, where he leads the Self-* Aspects of Networked and Distributed Systems Research Group and teaches courses on security management and cryptography and network security. Well published, he has focused his research on P2P storage, decentralized online social networks, structured overlays, and computational trust. His current research interests include the design of resilient large-scale distributed systems, coding for storage, security and privacy, and social media analysis. His projects have been funded by the Singapore Ministry of Education, HP Labs Innovation Research Award, and more.

Ee-Peng Lim is a professor at Singapore Management University (SMU), co-director of the SMU/Carnegie Mellon University Living Analytics Research Center, and associate editor of numerous journals and publications. He holds a Ph.D. from the University of Minnesota, Minneapolis, USA, and a B.Sc. from the National University of Singapore. His current research interests include social network and web mining, information integration, and digital libraries. A former ACM Publications Board member, he currently serves on the steering committees of the International Conference on Asian Digital Libraries, the Pacific Asia Conference on Knowledge Discovery and Data Mining, and the International Conference on Social Informatics.