
2-D/3-D fusion-based robust pose normalisation of 3-D livestock from multiple RGB-D cameras

Marinello F.; Pezzuolo A.
2022

Abstract

In recent years, 3-D scanning data have become increasingly important for precision livestock farming. In particular, RGB-D (red, green, blue – depth) data of livestock have come to play a critical role in livestock body measurement. However, the latest livestock pose normalisation methods rely purely on 3-D geometric data and are therefore prone to errors caused by noise and missing data. To achieve adequate performance in practical applications, particularly across different livestock species, a robust 2-D/3-D fusion-based livestock pose normalisation method is proposed. Firstly, building on advanced 2-D object detection techniques, the proposed approach makes the best use of 2-D information to determine the accurate orientation of livestock in 3-D. Secondly, the 2-D detection results are used to generate frustums in 3-D space that locate the livestock targets, which markedly reduces the search space and improves segmentation. Finally, a more robust pose normalisation algorithm is applied within a bilateral symmetry-based pose normalisation framework. Extensive experiments with multiple-view RGB-D data of livestock show that the proposed method is more robust and practical than existing pose normalisation methods that operate purely in 3-D. The proposed algorithm provides pose normalisation within an automatic body measurement system for livestock. This study suggests that 2-D/3-D fusion-based strategies should be explored in more detail, particularly when the 3-D input is captured by consumer-grade RGB-D cameras, which are often noisy and have missing values at certain pixels. All the training databases and code used in the study can be downloaded freely.
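The paper's implementation is not reproduced here, but the frustum-generation step described in the abstract can be illustrated with a minimal sketch: a 2-D detection box is back-projected through the pinhole camera model, and only the 3-D points whose projection falls inside the box (i.e. inside the box's viewing frustum) are kept, shrinking the search space before segmentation. The function name, the intrinsics (fx, fy, cx, cy), and the example bounding box below are illustrative assumptions, not the authors' code.

import numpy as np

def frustum_filter(points, bbox, fx, fy, cx, cy):
    """Keep the 3-D points whose pinhole projection falls inside a 2-D
    detection box, i.e. the points inside the box's viewing frustum.

    points : (N, 3) array of X, Y, Z in the camera frame (Z forward, metres)
    bbox   : (u_min, v_min, u_max, v_max) detection box in pixel coordinates
    fx, fy, cx, cy : pinhole camera intrinsics (assumed values, see above)
    """
    pts = points[points[:, 2] > 0]          # discard points behind the camera
    u = fx * pts[:, 0] / pts[:, 2] + cx     # project X to pixel column
    v = fy * pts[:, 1] / pts[:, 2] + cy     # project Y to pixel row
    u_min, v_min, u_max, v_max = bbox
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return pts[inside]

# Toy usage with made-up intrinsics and a made-up detection box.
rng = np.random.default_rng(0)
cloud = rng.uniform([-2.0, -2.0, 0.5], [2.0, 2.0, 5.0], size=(10_000, 3))
target = frustum_filter(cloud, bbox=(200, 150, 440, 330),
                        fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(f"{len(target)} of {len(cloud)} points fall inside the frustum")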
Files in this product:
File: Paper.pdf (not available)
Description: Article in press
Type: Postprint (accepted version)
Licence: Private access - not public
Size: 3.05 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3417226
Citations
  • PMC: not available
  • Scopus: 8
  • Web of Science: 8