
Overview of the CLEF 2024 SimpleText Task 2: Identify and Explain Difficult Concepts

Di Nunzio, G. M.; Vezzani, F.; Bonato, V.
2024

Abstract

In this paper, we present an overview of “Task 2: Complexity Spotting, Identifying and Explaining Difficult Concepts” within the context of the Automatic Simplification of Scientific Texts (SimpleText) lab, run as part of CLEF 2024. The primary objective of the SimpleText lab is to improve the accessibility of scientific information through automatic text simplification, thereby promoting a more inclusive approach to the dissemination of scientific knowledge. Task 2 focuses on complexity spotting within scientific text passages: the goal is to detect terms and concepts that require specific background knowledge to understand a passage, assess their difficulty for non-experts, and provide explanations for the detected difficult concepts. A total of 39 submissions were received for this task, originating from 12 distinct teams. In this paper, we describe the data collection process, the task configuration, and the evaluation methodology employed. Additionally, we provide a brief summary of the approaches adopted by the participating teams.
2024
Electronic
English
CEUR Workshop Proceedings
3740
3129
3146
18
CEUR-WS
25th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF 2024
2024
fra
Computer Science & Engineering
Language & Linguistics
automatic text simplification; background knowledge; contextualization; science popularization; scientific article; term difficulty; terminology
Italy
France
Netherlands
273
Di Nunzio, G. M.; Vezzani, F.; Bonato, V.; Azarbonyad, H.; Kamps, J.; Ermakova, L.
6
open
info:eu-repo/semantics/conferenceObject
04 Contribution in conference proceedings::04.01 - Contribution in conference proceedings
Files in this record:
Nunzio et al. - 2024 - Overview of the CLEF 2024 SimpleText Task 2 Ident.pdf
Access: open access
Type: Published (Publisher's Version of Record)
License: Creative Commons
Size: 1.39 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3542148
Citations
  • Scopus: 9