Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review

Cestonaro, Clara; Delicati, Arianna; Marcante, Beatrice; Caenazzo, Luciana; Tozzo, Pamela
2023

Abstract

Artificial intelligence (AI) in medicine is an increasingly studied and widespread phenomenon, applied in multiple clinical settings. Alongside its many potential advantages, such as easing clinicians' workload and improving diagnostic accuracy, the use of AI raises ethical and legal concerns to which there is still no unanimous response. A systematic literature review on medical professional liability related to the use of AI-based diagnostic algorithms was conducted using the public electronic database PubMed, selecting studies published from 2020 to 2023. The systematic review was performed according to the 2020 PRISMA guidelines. The literature review highlights how the issue of liability in the case of AI-related error and patient harm has received growing attention in recent years. The application of AI and diagnostic algorithms also raises questions about the risks of using unrepresentative populations during development and about the completeness of the information given to the patient. Concerns about the impact on the fiduciary relationship between physician and patient, and on empathy, have also been raised. The use of AI in the medical field and the application of diagnostic algorithms have introduced a revolution in the doctor-patient relationship, resulting in multiple possible medico-legal consequences. The regulatory framework on medical liability when AI is applied is currently inadequate and requires urgent intervention, as there is no single, specific regulation governing the liability of the various parties involved in the AI supply chain, nor of end users. Greater attention should be paid to the risk inherent in AI and the consequent need for regulations on product safety, as well as the maintenance of minimum safety standards through appropriate updates.
Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3502373