Patricia Kahr

PhD Candidate


Curriculum vitae


p.k.kahr [at] tue [dot] nl


Human-Technology Interaction Group

Eindhoven University of Technology



The Trust Recovery Journey. The Effect of Timing of Errors on the Willingness to Follow AI Advice.


Conference paper


Patricia K. Kahr, G. Rooks, Chris C. P. Snijders, M. Willemsen
International Conference on Intelligent User Interfaces, 2024


Cite

APA
Kahr, P. K., Rooks, G., Snijders, C. C. P., & Willemsen, M. (2024). The Trust Recovery Journey. The Effect of Timing of Errors on the Willingness to Follow AI Advice. International Conference on Intelligent User Interfaces.


Chicago/Turabian
Kahr, Patricia K., G. Rooks, Chris C. P. Snijders, and M. Willemsen. “The Trust Recovery Journey. The Effect of Timing of Errors on the Willingness to Follow AI Advice.” International Conference on Intelligent User Interfaces (2024).


MLA
Kahr, Patricia K., et al. “The Trust Recovery Journey. The Effect of Timing of Errors on the Willingness to Follow AI Advice.” International Conference on Intelligent User Interfaces, 2024.


BibTeX

@inproceedings{patricia2024a,
  title = {The Trust Recovery Journey. The Effect of Timing of Errors on the Willingness to Follow AI Advice.},
  year = {2024},
  booktitle = {International Conference on Intelligent User Interfaces},
  author = {Kahr, Patricia K. and Rooks, G. and Snijders, Chris C. P. and Willemsen, M.}
}

Abstract

Complementing human decision-making with AI advice offers substantial advantages. However, humans do not always trust AI advice appropriately and are overly sensitive to incidental AI errors, even when overall performance is good. Research has yet to uncover how trust declines and recovers over time in repeated human-AI interactions. Our work investigates the consequences of an incidental AI error for (self-reported) trust and for participants’ reliance on AI advice. Results from our experiment, in which 208 participants evaluated 14 legal cases before and after receiving algorithmic advice, showed that trust decreased significantly after both early and late errors but was rapidly restored in both scenarios. Reliance dropped significantly only after early errors, not after late errors, and recovered in both scenarios. The results suggest that late errors, compared to early ones, cause a less drastic loss of trust and allow quicker recovery. These findings are consistent with the interpretation that humans build up trust over time when a system performs well, making them more tolerant of incidental AI errors.

