RT Journal Article
JF IEEE Transactions on Knowledge & Data Engineering
YR 2009
VO 22
SP 1286
TI Credibility: How Agents Can Handle Unfair Third-Party Testimonies in Computational Trust Models
A1 Zhiqi Shen
A1 Cyril Leung
A1 Chunyan Miao
A1 Jianshu Weng
A1 Angela Goh Eck Soong
K1 Agent
K1 trust
K1 credibility
K1 unfair testimonies
AB Agents in multiagent systems usually represent different stakeholders with their own distinct, and sometimes conflicting, interests and objectives. They behave so as to achieve their own objectives, even at the cost of others; interacting with other agents therefore carries risk. A number of computational trust models have been proposed to manage such risk. However, the performance of most computational trust models that rely on third-party recommendations to derive trust deteriorates in the presence of unfair testimonies. Several attempts have been made to combat the influence of unfair testimonies. Nevertheless, they are either not readily applicable, since they require additional information that is unavailable in realistic settings, or ad hoc, as they are tightly coupled with specific trust models. Against this background, a general credibility model is proposed in this paper. Empirical studies show that the proposed credibility model is more effective than related work in mitigating the adverse influence of unfair testimonies.
PB IEEE Computer Society
SN 1041-4347
LA English
DO 10.1109/TKDE.2009.138
LK http://doi.ieeecomputersociety.org/10.1109/TKDE.2009.138