SAIDA — Security of AI for Defense Applications

SAIDA targets the AID chair “Fiabilité de l’intelligence artificielle, vulnérabilités et contre-mesures” (reliability of artificial intelligence, vulnerabilities and countermeasures). It aims to establish the fundamental principles for designing reliable and secure AI systems: a reliable AI maintains its performance even under uncertainty; a secure AI resists attacks in hostile environments.

Reliability and security are challenged both at training time and at test time. SAIDA therefore studies core issues related to poisoning the training data, stealing the parameters of a model, and inferring sensitive training data from information leaks. Additionally, SAIDA targets uncovering the fundamentals of attacks and defenses engaging AI at test time (see the sketch after the list below). Three converging research directions make up SAIDA:

  1. theoretical investigations grounded in statistics and applied mathematics to discover the underpinnings of reliability and security,
  2. connections between adversarial sampling and Information Forensics and Security,
  3. protection of the training data and the AI system.
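
As a concrete illustration of the test-time attacks studied here, the sketch below implements the fast gradient sign method (FGSM), a standard white-box baseline, in PyTorch. It is not a SAIDA-specific method, and the model and inputs are random stand-ins.

```python
# Minimal FGSM sketch: a white-box, test-time evasion attack (standard baseline,
# not SAIDA's own method). The model and data below are random stand-ins.
import torch
import torch.nn as nn

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    """Perturb x within an L-infinity ball of radius eps to increase the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Single ascent step in the sign of the input gradient, then clip to valid pixels.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage: a linear classifier on random 32x32 RGB "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)      # batch of inputs in [0, 1]
y = torch.randint(0, 10, (4,))    # ground-truth labels
x_adv = fgsm(model, x, y, eps=8 / 255)
print((x_adv - x).abs().max())    # perturbation never exceeds eps
```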

SAIDA thus combines theoretical investigations with more applied, heuristic studies to guarantee both the applicability of its findings and the ability to cope with real-world settings.

Funding: ANR-20-CHIA-011-01
Contact: Teddy Furon

Dates: 2020-2024
Teddy Furon (Principal Investigator)

Researchers:

  • Laurent Amsaleg (Linkmedia, CNRS)
  • Erwan Le Merrer (WIDE, Inria)
  • Mathias Rousset (SIMSMART, Inria)
  • Patrick Bas (CRIStAL, CNRS)

Postdoc:

  • Kassem Kallas (Inria): DNN watermarking, Backdoor attacks

PhD students:

  • Benoit Bonnet (Inria): White-box adversarial attacks
  • Thibault Maho (Inria): Black-box adversarial attacks and DNN fingerprinting
  • Samuel Tap (ZAMA): Machine Learning in the encrypted domain
  • Karim Tit (Thales): Robustness to uncertainties

Interns:

  • Gautier Evennou: Physical attacks against face recognition
  • Maxime Minard: Membership inference attack
  • Paul Chaurand: White-box adversarial attack