
Symbolic and Neuro-Symbolic Approaches for Interpretability in Machine Learning with Applications to Medical Imaging / Bergamin, Luca. - (2026 Mar 31).

Symbolic and Neuro-Symbolic Approaches for Interpretability in Machine Learning with Applications to Medical Imaging

Bergamin, Luca
2026

Abstract

The increasing deployment of artificial intelligence (AI) and machine learning systems in critical real-world domains has highlighted the urgent need for interpretability, so that humans can understand the rationale behind model decisions. This thesis addresses the ill-defined and multifaceted challenge of interpretability in AI. Interpretable models are essential for fostering trust, enhancing safety, supporting debugging and auditing, and ultimately increasing the adoption of advanced technologies. The research investigates three primary paradigms: (i) rule-set-based methods, which offer high interpretability and explicit knowledge representation but lack scalability and adaptability to continuous data; (ii) counterfactual methods, which excel at handling continuous data but are limited to explaining individual predictions; and (iii) domain-driven interpretability approaches, which aim to develop interpretable tools alongside the target application. The work demonstrates the design and implementation of interpretable and trustworthy AI systems through a combination of symbolic reasoning, neural learning, and their hybridization. Contributions include novel methodologies for counterfactual explanations, neuro-symbolic integration, and the systematic evaluation of interpretability in practical applications. Multiple peer-reviewed publications in major journals and international conferences attest to the impact and maturity of the research. Collectively, this thesis lays the groundwork for building AI systems in which interpretability, trustworthiness, and real-world utility are not trade-offs but co-existing, mutually reinforcing properties.
Files in this item:
tesi_definitiva_Luca_Bergamin.pdf (open access)
Description: tesi_definitiva_Luca_Bergamin
Type: Doctoral thesis
Size: 9.33 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11577/3591820