Using trust to determine user decision making & task outcome during a human-agent collaborative task

Publisher: ACM
Publication Type: Conference Proceeding
Citation: ACM/IEEE International Conference on Human-Robot Interaction, 2021, pp. 73-82
Issue Date: 2021-03-08
Filename: 3434073.3444673.pdf
Description: Published version
Size: 2.04 MB
Format: Adobe PDF
Abstract:
Optimal performance of collaborative tasks requires consideration of the interactions between socially intelligent agents, such as social robots, and their human counterparts. The functionality and success of these systems lie in their ability to establish and maintain user trust, with too much or too little trust leading to over-reliance or under-utilisation, respectively. This problem highlights the need for an appropriate trust calibration methodology; the work in this paper focuses on the first step: investigating user trust as a behavioural prior. Two pilot studies (Studies 1 and 2) are presented, the results of which inform the design of Study 3. Study 3 investigates whether trust can determine user decision making and task outcome during a human-agent collaborative task. Results demonstrate that trust can be behaviourally assessed in this context using an adapted version of the Trust Game. Further, an initial behavioural measure of trust significantly predicts task outcome. Finally, assistance type and task difficulty interact to affect user performance. Notably, participants improved their performance on the hard task when paired with correct assistance, with this improvement comparable to performance on the easy task with no assistance. Future work will focus on investigating factors that influence user trust during human-agent collaborative tasks and providing a domain-independent model of trust calibration.
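
For readers unfamiliar with the behavioural measure referenced in the abstract, the sketch below illustrates the payoff structure of the classic Trust Game from behavioural economics, in which the fraction of an endowment sent to a partner is commonly taken as a behavioural index of trust. The paper adapts this game for a human-agent setting; the endowment, multiplier value, and function name below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the classic Trust Game payoff structure, on which the
# adapted behavioural trust measure is based. Endowment and multiplier values
# are illustrative assumptions, not taken from the paper.
def trust_game_payoffs(endowment, amount_sent, amount_returned, multiplier=3):
    """Return (investor_payoff, trustee_payoff) for one round.

    amount_sent / endowment is commonly used as a behavioural index of trust.
    """
    if not 0 <= amount_sent <= endowment:
        raise ValueError("amount sent must lie between 0 and the endowment")
    transferred = amount_sent * multiplier
    if not 0 <= amount_returned <= transferred:
        raise ValueError("amount returned must lie between 0 and the multiplied transfer")
    investor_payoff = endowment - amount_sent + amount_returned
    trustee_payoff = transferred - amount_returned
    return investor_payoff, trustee_payoff

# Example: sending 6 of a 10-unit endowment corresponds to a trust level of 0.6.
investor, trustee = trust_game_payoffs(endowment=10, amount_sent=6, amount_returned=9)
print(investor, trustee)  # 13 9

In a human-agent adaptation, the "trustee" role is played by the agent, so the amount a participant chooses to send before interacting with the agent can serve as an initial behavioural trust measure of the kind the abstract describes.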