HIJACK: Learning-based Strategies for Sound Classification Robustness to Adversarial Noise

Meneghello, F.
2023

Abstract

The effective deployment of smart service systems within homes, workspaces, and cities requires gaining context and situational awareness so that action can be taken when changes are detected. To this end, sound classification systems are widely adopted and integrated into smart devices to continuously monitor the environment. However, sound classification algorithms are prone to adversarial attacks, which pose a considerable security threat to the smart service systems into which they are integrated. In this paper, we devise HIJACK, a novel machine learning framework comprising five neural network strategies that strengthen the robustness of sound classification systems against adversarial noise injection. The HIJACK strategies can be applied to any neural network-based sound classifier and consist of tailored transformations of the input audio during training, together with specific additional layers added to the neural network architecture. To assess the noise robustness provided by the HIJACK strategies, we design a measure based on an L2 adversarial attack on sound classification, termed the normalized fast gradient method (NFGM), which constructs the adversarial noise by maximizing the sound misclassification probability. We assessed the robustness of HIJACK to the proposed NFGM attack on a publicly available dataset. The results show that combining the five HIJACK strategies yields robustness to adversarial noise 58 times larger than that of state-of-the-art neural networks for sound classification, while guaranteeing a classification accuracy above 83%.
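The abstract does not spell out the NFGM formulation, so the following is a minimal sketch of what such an L2 attack and the associated robustness measure could look like, assuming NFGM is the standard fast gradient step rescaled to a fixed L2 norm budget. All names here (nfgm_attack, max_robust_epsilon, epsilon, accuracy_floor) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def nfgm_attack(model, x, y, epsilon):
    """Sketch of an L2-normalized fast gradient attack.

    Takes one gradient step that increases the classifier's loss on the
    true labels y, then rescales the step to L2 norm epsilon, so the
    perturbation pushes toward misclassification under a fixed L2
    noise budget (one plausible reading of the NFGM described above).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)

    # Rescale each per-example gradient to have L2 norm epsilon.
    flat = grad.reshape(grad.size(0), -1)
    norms = flat.norm(p=2, dim=1).clamp_min(1e-12)
    delta = epsilon * grad / norms.view(-1, *([1] * (grad.dim() - 1)))
    return (x + delta).detach()

def max_robust_epsilon(model, loader, epsilons, accuracy_floor=0.83):
    """Robustness measure sketch: the largest L2 budget epsilon for which
    classification accuracy under the attack stays above a floor (the
    abstract reports accuracy above 83%). Assumes model is in eval mode.
    """
    best = 0.0
    for eps in sorted(epsilons):
        correct, total = 0, 0
        for x, y in loader:
            x_adv = nfgm_attack(model, x, y, eps)
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        if correct / total >= accuracy_floor:
            best = eps
    return best
```

Under this reading, the "58 times larger" robustness reported in the abstract would correspond to the ratio between the largest tolerated epsilon for a HIJACK-hardened classifier and that of a baseline network.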
2023 IEEE International Conference on Smart Computing (SMARTCOMP)
ISBN: 979-8-3503-2281-1

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3498561