Trust in AI: why we should be designing for APPROPRIATE reliance. Academic Article

Overview

abstract

  • Use of artificial intelligence in healthcare, such as machine learning-based predictive algorithms, holds promise for improving outcomes, yet few systems are used in routine clinical practice. Trust has been cited as an important barrier to meaningful use of artificial intelligence in clinical practice. Because artificial intelligence systems often automate cognitively challenging tasks, the prior literature on trust in automation may hold important lessons for artificial intelligence applications in healthcare. In this perspective, we argue that the informatics community should draw on that literature: the goal should be to foster appropriate trust in artificial intelligence, grounded in the purpose of the tool, its process for making recommendations, and its performance in the given context. We adapt a conceptual model to support this argument and present recommendations for future work.

publication date

  • December 28, 2021

Research

keywords

  • Artificial Intelligence
  • Trust

Identity

PubMed Central ID

  • PMC8714273

Scopus Document Identifier

  • 85123227423

Digital Object Identifier (DOI)

  • 10.1093/jamia/ocab238

PubMed ID

  • 34725693

Additional Document Info

volume

  • 29

issue

  • 1