Generative artificial intelligence in Laboratory Medicine: innovations, limitations, and ethical considerations

GALOZZI, Paola;
2025

Abstract

Generative artificial intelligence (AI) has the potential to transform Laboratory Medicine by enhancing data processing, report generation, and research productivity. Large language models (LLMs) can generate coherent text, extract critical information from complex datasets, and assist in scientific writing, significantly reducing the time and effort these tasks require. By streamlining these processes, AI may improve the efficiency and accuracy of clinical operations and research. However, the integration of AI in healthcare raises significant challenges, including the risk of erroneous outputs known as “hallucinations,” in which the generated content is inaccurate or fabricated, underscoring the need for human oversight. In addition to automating text summarization and document management, AI is a powerful tool for data analysis, lowering technical barriers and making advanced statistical processing more accessible to laboratory professionals. These capabilities are particularly valuable in a field where the ability to process large datasets is critical for research with a potential impact on diagnostics. Despite these benefits, ethical concerns such as plagiarism, automation bias, and data privacy risks remain pressing, given that AI relies on large datasets that include sensitive patient information. This manuscript critically examines the current state of AI in Laboratory Medicine, addressing its potential to drive innovation in the field while outlining the ethical and practical considerations that must be addressed to ensure its responsible implementation. The future of AI in Laboratory Medicine depends on developing robust frameworks for its use and on training healthcare professionals to leverage these tools effectively, ensuring that they augment rather than replace human expertise.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3549118
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: not available
  • OpenAlex: 0