Graph-based Explainable Recommendation Systems: Are We Rigorously Evaluating Explanations? A Position Paper

Andrea Montagna; Alvise De Biasio; Nicolò Navarin; Fabio Aiolli
2023

Abstract

In recent years, we have witnessed an increase in the amount of published research in the field of Explainable Recommender Systems. These systems are designed to help users find the items of most interest to them by providing not only suggestions but also the reasons behind those recommendations. Research has shown that there are many advantages to complementing a recommendation with a convincing explanation. For example, such an explanation can often increase user trust, which in turn can improve recommendation effectiveness and system adoption. For this reason in particular, many research works study explainable recommendation algorithms based on graphs, e.g., exploiting knowledge graphs or methods based on graph neural networks. The use of graphs is very promising, since such algorithms can, in principle, combine the benefits of personalization and graph reasoning, thus potentially improving the effectiveness of both recommendations and explanations. However, although graph-based algorithms have repeatedly been shown to improve ranking quality, little literature has yet studied how to properly evaluate the quality of the corresponding explanations. In this position paper, we focus on this problem: we examine in detail how the explanations of graph-based explainable recommenders are currently evaluated, and we discuss how they could be evaluated in the future in a more quantitative and comparable way, in compliance with the well-known Explainable Recommender Systems guidelines.
CEUR Workshop Proceedings

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3500982
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a