
Efficient on-device damage segmentation for cultural heritage using pruning and knowledge distillation

Xiaoyu Liu; Francesca da Porto; Elisa Saler; Marco Donà
2026

Abstract

Cultural heritage (CH) buildings may suffer damage due to aging, and computer vision can help detect such damage and support protection measures. However, damage segmentation models for CH buildings still face challenges such as large parameter sizes, low computational efficiency, and limited portability. This paper proposes a real-time embedded system for segmenting damage in CH building images based on a lightweight neural network and knowledge distillation. First, an improved YOLOv8n-Ghost model is established, which incorporates the Ghost module and a pruning method to construct a lightweight network and reduce model redundancy while maintaining detection accuracy and segmentation performance. Second, a channel-wise knowledge distillation method is applied so that the student model learns from the teacher model and improves in accuracy without increasing the number of network parameters. Finally, a dataset covering seven types of damage in CH buildings is constructed and used to train and validate the deep learning model. Experimental results demonstrate that the proposed damage segmentation model, trained on CH building images, achieves an average precision of 0.824 and can process a 512 × 512 image in 0.27 s (204 FPS). Pruning reduces the model size to 2.94 MB, and fine-tuning effectively restores the accuracy lost in the process. Moreover, knowledge distillation further enhances feature extraction, enabling accurate, real-time segmentation of various damage types and making the model suitable for UAV-based CH building inspections. Two case studies, on a communal building and a Renaissance building in Padova, Italy, confirm the effectiveness of the trained algorithm. The proposed model was successfully deployed on an Android device, demonstrating accurate damage segmentation with high adaptability and efficient on-device processing.
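The channel-wise distillation mentioned in the abstract can be sketched as follows: for each channel of the teacher and student feature maps, the spatial activations are normalized into a probability distribution with a temperature-scaled softmax, and the student is penalized by the KL divergence between the two distributions, averaged over channels. The snippet below is a minimal NumPy illustration of such a loss; the function names, temperature value, and feature shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def channel_softmax(feat, T=4.0):
    # feat: (C, H, W) activation map; softmax over spatial positions per channel
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w) / T
    flat = flat - flat.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(flat)
    return e / e.sum(axis=1, keepdims=True)

def cwd_loss(student_feat, teacher_feat, T=4.0):
    # Channel-wise KL divergence KL(teacher || student), scaled by T^2
    # as is standard in temperature-based distillation.
    ps = channel_softmax(student_feat, T)
    pt = channel_softmax(teacher_feat, T)
    kl = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(axis=1)
    return (T * T) * kl.mean()
```

In training, this loss would be added to the ordinary detection/segmentation loss so the lightweight student mimics the per-channel spatial attention of the larger teacher without gaining any parameters.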
Funding:
  • Guangdong Basic and Applied Basic Research Foundation
  • National Natural Science Foundation of China
  • China Postdoctoral Science Foundation
  • University of Padua and Guangzhou University
  • Alliance of Guangzhou International Sister City Universities (GISU)
  • University of Padua
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3576700
Citations:
  • Scopus: 2
  • Web of Science: 2
  • OpenAlex: 2