Did I Miss Anything? A Study on Ranking Fusion and Manual Query Rewriting in Consumer Health Search

Di Nunzio, G. M.; Vezzani, F.
2022

Abstract

In this paper, we describe the methodology and experimental analysis of a twofold strategy for the retrieval of medically relevant information: a ranking fusion approach and a query reformulation approach. In particular, the query reformulation approach is based on the idea that a query is composed of two parts, the primary term and the secondary term, and that these two parts can be substituted with alternative terms to create reformulations of the original query. The goal of our work is to evaluate the performance of a search engine over 1) manual query variants; 2) different retrieval functions; 3) retrieval with and without pseudo-relevance feedback; 4) reciprocal rank fusion. We describe the experiments based on the CLEF eHealth 2021 Consumer Health Search Task dataset. The results show that 1) a ranking fusion approach over the baseline models improves MAP significantly; 2) manual query variants raise new questions about a possible unintentional bias in the pool of documents that were selected for relevance assessment.
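The abstract names two concrete components. First, manual query variants: a minimal sketch, assuming the variants are built by pairing alternatives for the primary and the secondary term of a query (the function name and the terms below are illustrative, not taken from the paper; the variants in the paper were written manually):

    from itertools import product

    def query_variants(primary_alts, secondary_alts):
        # Pair every alternative for the primary term with every
        # alternative for the secondary term to form a variant query.
        return [f"{p} {s}" for p, s in product(primary_alts, secondary_alts)]

    # Hypothetical alternatives for an original query "heart attack symptoms".
    variants = query_variants(["heart attack", "myocardial infarction"],
                              ["symptoms", "warning signs"])

Second, reciprocal rank fusion, which merges the ranked lists produced by different retrieval functions. A sketch using the standard RRF formulation, score(d) = sum over runs of 1 / (k + rank of d in that run), with the customary smoothing constant k = 60 from Cormack et al. (2009); the exact runs and parameters used in the paper may differ:

    from collections import defaultdict

    def reciprocal_rank_fusion(runs, k=60):
        # Each run is a ranked list of document ids, best first.
        scores = defaultdict(float)
        for run in runs:
            for rank, doc_id in enumerate(run, start=1):
                scores[doc_id] += 1.0 / (k + rank)
        # Return document ids, highest fused score first.
        return sorted(scores, key=scores.get, reverse=True)

    # Illustrative runs, e.g. a BM25 run and a pseudo-relevance-feedback run.
    fused = reciprocal_rank_fusion([["d3", "d1", "d7"], ["d1", "d3", "d9"]])

The fused ranking rewards documents that appear near the top of several runs, which is the intuition behind why fusing the baseline models can improve MAP even when no single run dominates.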
13th International Conference of the Cross-Language Evaluation Forum for European Languages, CLEF 2022
ISBN: 978-3-031-13642-9; 978-3-031-13643-6
File in this record:
File: dinunzio_vezzani_clef2022.pdf (not available for download)
Type: Published (publisher's version)
License: Private access - not public
Size: 1.34 MB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3457817
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 1