
Exact Robustness Certification of k-Nearest Neighbors

Ranzato, Francesco; Shakeel, Ahmad; Zanella, Marco
2025

Abstract

Robustness guarantees are essential for deploying machine learning models in security-critical environments where adversarial attacks pose a serious threat. While extensive progress has been made in certifying (deep) neural networks, nonparametric models such as k-Nearest Neighbors (k-NN) have been less investigated, despite their interpretability and usage in high-assurance settings. Prior certification methods for k-NN provide sound but incomplete guarantees, leaving many genuinely robust inputs uncertified. This work introduces a sound and complete certification framework for k-NN classifiers, offering exact robustness guarantees against adversarial perturbations. Our approach combines hypercube space decomposition with a novel graph-theoretic analysis based on an adversarial proximity precedence graph, enabling full coverage of adversarial regions. Extensive evaluation on widely used datasets demonstrates that our exact methodology significantly improves certification rates over existing techniques while maintaining scalability. By closing the gap between soundness and completeness, our framework advances the security guarantees of k-NN models and contributes to the broader goal of provably robust machine learning in adversarial settings.
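To make the certification question concrete, the following is a minimal, hypothetical Python sketch of a *sound but incomplete* robustness check for 1-NN under L∞ perturbations — the kind of incomplete guarantee the paper improves to exactness. It is not the paper's algorithm; all function names and the toy data are illustrative. The idea: bound the L2 distance from any perturbed input to each training point, and certify only when some training point provably beats every differently-labelled point.

```python
import numpy as np

def distance_bounds(x, t, eps):
    """Lower/upper bounds on the L2 distance from any x' with
    ||x' - x||_inf <= eps to the training point t."""
    d = np.abs(x - t)
    lo = np.linalg.norm(np.maximum(d - eps, 0.0))  # best case: perturb toward t
    hi = np.linalg.norm(d + eps)                   # worst case: perturb away from t
    return lo, hi

def certify_1nn(X_train, y_train, x, eps):
    """Sound (but incomplete) certificate for 1-NN: returns True only if
    some training point is provably nearer, under every admissible
    perturbation, than all points carrying a different label."""
    bounds = [distance_bounds(x, t, eps) for t in X_train]
    for (lo_i, hi_i), y_i in zip(bounds, y_train):
        # Point i certifiably wins if its worst-case distance beats the
        # best-case distance of every differently-labelled point.
        if all(hi_i < lo_j
               for (lo_j, _), y_j in zip(bounds, y_train) if y_j != y_i):
            return True
    return False  # "unknown": possibly robust, but not certified

# Toy example: two well-separated classes in the plane.
X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y = np.array([0, 0, 1])
print(certify_1nn(X, y, np.array([0.2, 0.1]), eps=0.1))  # True: certified robust
print(certify_1nn(X, y, np.array([3.0, 3.0]), eps=0.5))  # False: not certified
```

The gap between `False` ("not certified") and genuine non-robustness is exactly the incompleteness that the paper's hypercube decomposition and adversarial proximity precedence graph are designed to eliminate.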
CCS '25: Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
2025 ACM SIGSAC Conference on Computer and Communications Security (CCS 2025)
Files in this record:

File: 3719027.3765140.pdf
Access: open access
Description: published version
Type: Published (Publisher's Version of Record)
License: Creative Commons
Size: 1.25 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3568619