Abstract

Robots are rapidly gaining acceptance, with the general public, industry and researchers starting to understand their utility, for example for delivery to homes or in hospitals. However, it is key to understand how to instil the appropriate amount of trust in the user. One aspect of a trustworthy system is its ability to explain its actions and be transparent, especially in the face of potentially serious errors. Here, we study various aspects of interaction transparency and their effect in a scenario where a robot performs triage as a suspected COVID-19 patient arrives at a hospital. Our findings consolidate prior work showing a main effect of robot errors on trust, while also showing that this effect depends on the level of transparency. Furthermore, our findings indicate that high interaction transparency leads participants to make better-informed decisions about their health based on their interaction. Such findings on transparency could inform interaction design and thus lead to greater adoption of robots in key areas, such as health and well-being.

Cite as

Nesset, B., Robb, D., Lopes, J. & Hastie, H. 2021, 'Transparency in HRI: Trust and Decision Making in the Face of Robot Errors', HRI '21 Companion: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery, pp. 313-317. https://doi.org/10.1145/3434074.3447183
