SAIDA targets the AID "Fiabilité de l'intelligence artificielle, vulnérabilités et contre-mesures" (Reliability of artificial intelligence, vulnerabilities and countermeasures) chair. It aims to establish the fundamental principles for designing reliable and secure AI systems: a reliable AI maintains good performance even under uncertainty; a secure AI resists attacks in hostile environments.
Reliability and security are challenged both at training time and at test time. SAIDA therefore studies core issues related to poisoning training data, stealing model parameters, and inferring sensitive training data from information leaks. Additionally, SAIDA targets uncovering the fundamentals of attacks and defenses engaging AI at test time. Three converging research directions make up SAIDA:
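As a concrete illustration of a test-time attack of the kind mentioned above, the sketch below applies a one-step, FGSM-style perturbation to a toy linear classifier. All names, weights, and the budget `eps` are illustrative assumptions for this sketch, not SAIDA artifacts.

```python
# Minimal sketch of a test-time adversarial perturbation (FGSM-style)
# on a toy linear classifier. Illustrative only; not SAIDA code.
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """One-step L-infinity attack on the linear score f(x) = w.x + b.

    The gradient of the margin y * f(x) with respect to x is y * w,
    so stepping along -sign(y * w) shrinks the margin; eps bounds
    the per-coordinate change.
    """
    return x - eps * np.sign(y * w)

# Toy model and input (assumed values for illustration).
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.3, -0.4, 0.2])
y = 1.0  # true label in {-1, +1}

margin_before = y * (w @ x + b)
x_adv = fgsm_perturb(x, w, y, eps=0.25)
margin_after = y * (w @ x_adv + b)

# The perturbation stays within the eps budget yet reduces the margin.
assert np.all(np.abs(x_adv - x) <= 0.25 + 1e-12)
assert margin_after < margin_before
```

Even this linear toy case shows the pattern behind real test-time attacks: a small, bounded input change, steered by gradient information, systematically degrades the classifier's confidence.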
- theoretical investigations grounded in statistics and applied mathematics to discover the underpinnings of reliability and security,
- bridging adversarial sampling and Information Forensics and Security,
- protecting the training data and the AI system.
SAIDA thus combines theoretical investigations with more applied and heuristic studies, to guarantee both the applicability of the findings and the ability to cope with real-world settings.
Contact: Teddy Furon
Principal Investigator: Teddy Furon
Researchers: Laurent Amsaleg (Linkmedia, CNRS), Erwan Le Merrer (WIDE, Inria), Mathias Rousset (SIMSMART, Inria), Patrick Bas (CRIStAL, CNRS)