Patricia Kahr

PhD Candidate




p.k.kahr [at] tue [dot] nl


Human-Technology Interaction Group

Eindhoven University of Technology



It Seems Smart, but It Acts Stupid: Development of Trust in AI Advice in a Repeated Legal Decision-Making Task


Conference paper


Patricia K. Kahr, G. Rooks, M. Willemsen, Chris C. P. Snijders
International Conference on Intelligent User Interfaces, 2023


APA
Kahr, P. K., Rooks, G., Willemsen, M., & Snijders, C. C. P. (2023). It Seems Smart, but It Acts Stupid: Development of Trust in AI Advice in a Repeated Legal Decision-Making Task. International Conference on Intelligent User Interfaces.


Chicago/Turabian
Kahr, Patricia K., G. Rooks, M. Willemsen, and Chris C. P. Snijders. “It Seems Smart, but It Acts Stupid: Development of Trust in AI Advice in a Repeated Legal Decision-Making Task.” International Conference on Intelligent User Interfaces (2023).


MLA
Kahr, Patricia K., et al. “It Seems Smart, but It Acts Stupid: Development of Trust in AI Advice in a Repeated Legal Decision-Making Task.” International Conference on Intelligent User Interfaces, 2023.


BibTeX

@inproceedings{kahr2023a,
  title = {It Seems Smart, but It Acts Stupid: Development of Trust in AI Advice in a Repeated Legal Decision-Making Task},
  year = {2023},
  booktitle = {International Conference on Intelligent User Interfaces},
  author = {Kahr, Patricia K. and Rooks, G. and Willemsen, M. and Snijders, Chris C. P.}
}

Abstract

Humans increasingly interact with AI systems, and successful interactions rely on individuals trusting such systems (when appropriate). Considering that trust is fragile and often cannot be restored quickly, we focus on how trust develops over time in a human-AI interaction scenario. In a 2x2 between-subjects experiment, we test how model accuracy (high vs. low) and type of explanation (human-like vs. not) affect trust in AI over time. We study a complex decision-making task in which individuals estimate jail time for 20 criminal law cases with AI advice. Results show that trust is significantly higher for high-accuracy models. Moreover, behavioral trust does not decline, and subjective trust even increases significantly with high accuracy. Human-like explanations did not generally affect trust but did boost trust in high-accuracy models.

