Founded by passionate advocates of learning and innovation, Learni set out to make professional training accessible to everyone, everywhere in the world. Our team works in major cities such as Paris, Lyon, and Marseille, as well as internationally, to support individuals and organizations in developing their skills.
Which format do you prefer?
30 free minutes with a training advisor — no commitment.
Professional training in Memphis in October 2026 with Learni. Certified, expert trainers, eligible for employer funding. Free quote.
Explore Learni's cutting-edge data visualization courses launching in April 2026, featuring AI-driven tools, VR simulations, and real-world projects for professionals.
Cybersecurity training in Sheffield in November 2026 with Learni. Certified, expert trainers, eligible for employer funding. Free quote.
Artificial Intelligence training in San Francisco in October 2026 with Learni. Certified, expert trainers, eligible for employer funding. Free quote.
The AI Red Teaming - Securing Adversarial AI Models training is delivered in person or remotely (blended learning, e-learning, virtual classroom, live remote sessions). At Learni, a Qualiopi-certified training organization, each program is designed to maximize skills acquisition, whichever training mode you choose.
The trainer alternates between demonstrative, interrogative, and active methods, using practical exercises and real-world scenarios. This pedagogical approach ensures concrete learning that is directly applicable in the workplace.
To ensure the quality of the AI Red Teaming - Securing Adversarial AI Models training, Learni provides the following teaching resources:
For in-house training held at a location outside Learni's premises, the client commits to providing all teaching materials (IT equipment, internet connection...) necessary for the proper conduct of the training, in accordance with the prerequisites stated in the training program provided.
Skills acquired during the AI Red Teaming - Securing Adversarial AI Models training are assessed through:
Learni is committed to the accessibility of its professional training programs. All our training programs are accessible to people with disabilities. Our teams are available to adapt teaching methods to your specific needs. Do not hesitate to contact us for any accommodation request.
Learni training programs are available for inter-company and intra-company settings, both in-person and remote. Registration is possible up to 48 business hours before the start of training. Our programs are eligible for OPCO, Pôle emploi, and FNE-Formation funding. Contact us to discuss your training project and funding possibilities.
Dive into the principles of red teaming applied to AI; install environments like CleverHans and Foolbox to simulate threats; analyze concrete cases of attacks on image-classification models; perform your first adversarial-example tests on real datasets; and produce an initial vulnerability report with practical recommendations.
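To give a feel for the first adversarial-example tests described above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression classifier. It is pure NumPy; the weights and inputs are illustrative and not taken from the course materials:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step Fast Gradient Sign Method against logistic regression.

    Perturbs input x by eps in the direction that increases the
    binary cross-entropy loss for the true label y (0 or 1).
    The gradient of that loss w.r.t. x is (sigmoid(w.x + b) - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # loss gradient w.r.t. the input
    return x + eps * np.sign(grad_x)        # signed-gradient perturbation

# Toy model: predicts class 1 when w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.4, 0.2])   # clean input, true label 1
y = 1

x_adv = fgsm(x, w, b, y, eps=0.5)
clean_pred = int(x @ w + b > 0)      # 1: the clean input is classified correctly
adv_pred = int(x_adv @ w + b > 0)    # 0: the attack flips the prediction
```

The perturbation stays inside an L-infinity ball of radius eps around the original input, which is what makes it hard to spot by inspection on image data.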
Explore the most sophisticated AI red-teaming attacks; generate imperceptible perturbations using algorithms like the Fast Gradient Sign Method and Projected Gradient Descent; test them on NLP and computer-vision models under enterprise conditions; refine attacks through interactive exercises in Jupyter notebooks; and evaluate the business impact of discovered vulnerabilities for immediate remediation.
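Projected Gradient Descent, mentioned above, is essentially iterated FGSM with a projection step. A minimal sketch on the same kind of toy logistic-regression model (again illustrative, not course code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd(x, w, b, y, eps, alpha, steps):
    """Projected Gradient Descent: repeated small signed-gradient steps,
    each followed by projection back into the L-infinity ball of
    radius eps around the original input x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad_x = (p - y) * w                      # loss gradient w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad_x)   # small FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# Toy model: predicts class 1 when w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.4, 0.2])   # clean input, true label 1
y = 1

x_adv = pgd(x, w, b, y, eps=0.3, alpha=0.1, steps=10)
clean_pred = int(x @ w + b > 0)     # 1
adv_pred = int(x_adv @ w + b > 0)   # 0: flipped within a tighter budget
```

Taking several small steps with projection usually finds stronger adversarial examples than a single FGSM step of the same total budget, which is why PGD is a standard baseline for robustness evaluation.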
Learn to counter threats using defensive AI red-teaming techniques; implement adversarial training on your TensorFlow or PyTorch models; deploy anomaly detectors based on isolation forests and autoencoders; simulate critical enterprise scenarios such as AI-enabled fraud; produce polished deliverables that certify resilience; and turn weaknesses into lasting competitive advantages.
Culminate with a complete team-based AI red-teaming simulation: attack and defend a deployed production AI model; use tools like the Adversarial Robustness Toolbox (ART) for comprehensive audits; draft a professional report with success metrics and a remediation roadmap; receive personalized expert feedback; and leave with certified skills ready for the enterprise.
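Two of the success metrics that typically anchor the kind of audit report described above are the attack success rate and robust accuracy. A small, library-free sketch of how they might be computed (the example predictions are made up for illustration):

```python
def attack_success_rate(clean_preds, adv_preds, labels):
    """Of the samples the model classified correctly before the attack,
    the fraction the attack manages to flip to a wrong prediction."""
    correct = [i for i, (p, t) in enumerate(zip(clean_preds, labels)) if p == t]
    if not correct:
        return 0.0
    flipped = sum(1 for i in correct if adv_preds[i] != labels[i])
    return flipped / len(correct)

def robust_accuracy(adv_preds, labels):
    """Plain accuracy, but measured on the attacked inputs."""
    return sum(p == t for p, t in zip(adv_preds, labels)) / len(labels)

labels = [1, 0, 1, 1, 0]
clean  = [1, 0, 1, 0, 0]   # model is right on 4 of 5 clean inputs
adv    = [0, 0, 1, 0, 1]   # predictions after the attack

asr  = attack_success_rate(clean, adv, labels)   # 2 of 4 flipped -> 0.5
racc = robust_accuracy(adv, labels)              # 2 of 5 correct -> 0.4
```

Reporting both numbers matters: robust accuracy summarizes the model's behavior under attack, while the attack success rate isolates the attacker's effectiveness on inputs the model originally got right.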
Target audience
AI engineers, cybersecurity experts, and corporate data scientists seeking to upskill
Prerequisites
Knowledge of machine learning, Python, and cybersecurity basics





























