Evaluating trustworthiness from past performances: Interval-based approaches

Publication Type: Conference Proceeding
Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2008, 5291 LNAI, pp. 33-46
Issue Date: 2008-12-01
In many multi-agent systems, the user has to decide whether he or she sufficiently trusts a certain agent to achieve a certain goal. To help users make such decisions, an increasing number of trust systems have been developed. By trust system, we mean a system that gathers information about an agent and evaluates its trustworthiness from this information. The aim of the present paper is to develop new trust systems that overcome limitations of existing ones. This is a challenging problem that raises questions such as: how should trustworthiness be represented, and from which information should it be estimated? We assume that a set of grades describing the past performances of the agent is given. With this common basis, two approaches are proposed. In the first one, the aim is to construct an interval that summarizes the grades. Such an interval gives a good account of the trustworthiness of the agent. We establish axioms that should be satisfied by summarizing methods, devise a particular method based on pulling, and check that it satisfies the axioms, which provides a theoretical justification for it. In the second approach, which is presented more briefly, a level of trust (the certainty that a future grade will be good) and a level of distrust (the fear that a future grade may be bad) are computed on the basis of the past grades. This approach rests on possibility theory and, thanks to the two levels, provides another view of trustworthiness, as well as summarizing intervals. © 2008 Springer-Verlag.
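
As a rough illustration of the kind of computation described in the abstract, and not the axiomatized method of the paper itself, the sketch below assumes past grades in [0, 1], summarizes them by a simple trimmed min-max interval, and derives trust and distrust levels in a possibility-theoretic style from a max-normalized frequency distribution. The thresholds GOOD and BAD, the trimming rule, and all function names are assumptions introduced only for this example.

# Illustrative sketch only; the interval rule, the possibility distribution,
# and the thresholds below are assumptions, not the paper's method.
from collections import Counter

GOOD = 0.7   # assumed threshold: a grade >= GOOD counts as "good"
BAD = 0.4    # assumed threshold: a grade <= BAD counts as "bad"

def summarizing_interval(grades, trim=0.1):
    """Summarize grades in [0, 1] by an interval that discards the most
    extreme `trim` fraction of grades on each side (an assumed, simple rule)."""
    ordered = sorted(grades)
    k = int(trim * len(ordered))
    trimmed = ordered[k: len(ordered) - k] or ordered
    return trimmed[0], trimmed[-1]

def possibility_distribution(grades):
    """Max-normalized frequencies: the most frequent grade gets possibility 1."""
    counts = Counter(grades)
    top = max(counts.values())
    return {g: c / top for g, c in counts.items()}

def trust_and_distrust(grades):
    """Trust = necessity that a future grade is good, i.e. 1 minus the
    possibility of a non-good grade; distrust = possibility of a bad grade."""
    pi = possibility_distribution(grades)
    poss_not_good = max((p for g, p in pi.items() if g < GOOD), default=0.0)
    poss_bad = max((p for g, p in pi.items() if g <= BAD), default=0.0)
    return 1.0 - poss_not_good, poss_bad

if __name__ == "__main__":
    past_grades = [0.9, 0.8, 0.8, 0.7, 0.6, 0.9, 0.3, 0.8]
    print("summarizing interval:", summarizing_interval(past_grades))
    print("trust, distrust:", trust_and_distrust(past_grades))

On the sample grades above, the trimmed interval spans the observed range, while the single low grade (0.3) lowers trust and raises distrust only in proportion to its relative frequency, which is the intuition the two-level view is meant to capture.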