
Establishing new boundaries for medical liability: The role of AI as a decision-maker

Boscolo-Berto, Rafael
2025

Abstract

The introduction of artificial intelligence (AI) in healthcare has created novel challenges for the field of medical malpractice. As healthcare professionals increasingly rely on AI in their decision-making processes, traditional medicolegal assessments may struggle to adapt. It is essential to examine AI's role in clinical care, both its current applications and future developments, to clarify accountability for diagnostic and therapeutic errors. Clinical decision support systems (CDSSs), in particular, unlike traditional medical technologies, act as co-decision makers alongside physicians. They process patient information, medical knowledge, and learned patterns to generate a decision output (e.g., a suggested diagnosis), which the physician should then evaluate. In light of the AI Act, CDSSs cannot function fully autonomously; instead, physicians are assigned an oversight role. It is questionable, however, whether it is always appropriate to assign full responsibility, and consequently liability, to the physician. This is especially true where oversight is limited to reviewing CDSS-generated outputs in a manner that leaves no real control in the hands of the physician. Future research should aim to define clear liability allocation frameworks and design workflows that ensure effective oversight, thereby preventing unfair liability burdens.
Files in this item:

File: 1601.pdf
Access: open access
Type: Published (Publisher's Version of Record)
License: Creative Commons
Size: 185.58 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3573657
Citations
  • PMC: 1
  • Scopus: 3
  • Web of Science: 4
  • OpenAlex: 0