Loading...
Please wait a moment
Founded by passionate advocates of learning and innovation, Learni set out to make professional training accessible to everyone, everywhere in the world. Our team works in the largest cities such as Paris, Lyon, Marseille, and internationally, to support talents and organizations in their skills development.
Which format do you prefer?
30 free minutes with a training advisor — no commitment.
Loading available slots...
Professional Training training in New York in September 2026 with Learni. Certified, expert trainers, eligible for employer funding. Free quote.
Artificial Intelligence training in Mesa in September 2026 with Learni. Certified, expert trainers, eligible for employer funding. Free quote.
Professional Training training in Fort Worth in July 2026 with Learni. Certified, expert trainers, eligible for employer funding. Free quote.
Professional Training training in Memphis in October 2026 with Learni. Certified, expert trainers, eligible for employer funding. Free quote.
The Training AI Red Teaming - Securing AI Models Against Adversaries training is delivered in-person or remotely (blended-learning, e-learning, virtual classroom, remote in-person). At Learni, a Qualiopi-certified training organization, each program is designed to maximize skills acquisition, regardless of the training mode chosen.
The trainer alternates between demonstrative, interrogative, and active methods (through practical exercises and/or real-world scenarios). This pedagogical approach ensures concrete and directly applicable learning in the workplace.
To ensure the quality of the Training AI Red Teaming - Securing AI Models Against Adversaries training, Learni provides the following teaching resources:
For in-house training at a location external to Learni, the client ensures and commits to having all necessary teaching materials (IT equipment, internet connection...) for the proper conduct of the training action in accordance with the prerequisites indicated in the communicated training program.
The assessment of skills acquired during the Training AI Red Teaming - Securing AI Models Against Adversaries training is carried out through:
Learni is committed to the accessibility of its professional training programs. All our training programs are accessible to people with disabilities. Our teams are available to adapt teaching methods to your specific needs. Do not hesitate to contact us for any accommodation request.
Learni training programs are available for inter-company and intra-company settings, both in-person and remote. Registration is possible up to 48 business hours before the start of training. Our programs are eligible for OPCO, Pôle emploi, and FNE-Formation funding. Contact us to discuss your training project and funding possibilities.
Dive into the principles of AI red teaming, evaluating model robustness via basic attacks like FGSM and PGD, install and configure dedicated frameworks such as ART and Foolbox for quick tests, analyze real business cases of jailbreaks on ChatGPT, perform your first practical exercises on modified ImageNet datasets, produce an initial vulnerability report with immediate recommendations to secure your professional AI projects.
Explore sophisticated AI red teaming techniques, injecting malicious prompts to bypass safeguards in LLMs like GPT-4, simulate transfer attacks between models to expose hidden weaknesses, use tools like Garak and PromptInject for automated scans, test backdoors on neural networks via poisoned datasets, apply these methods to real enterprise use cases in finance and healthcare, generate concrete deliverables including proofs-of-concept and effectiveness metrics to validate your certifying skills.
Master countermeasures in AI red teaming, implementing adversarial training to harden models against persistent attacks, audit complete systems with CI/CD pipelines integrating red team tests, deploy tools like RobustBench to benchmark resilience, analyze real incidents such as data leaks at tech giants, finalize a red thread project on your personal AI model with executive report and Qualiopi certification plan, leave with skills ready for professional enterprise audits.
Target audience
AI experts, pentesters, security engineers, and cybersecurity managers seeking to upskill
Prerequisites
Basics in machine learning, Python, and application cybersecurity concepts
Loading...
Please wait a moment





























