Founded by passionate advocates of learning and innovation, Learni set out to make professional training accessible to everyone, everywhere. Our team operates from major cities such as Paris, Lyon, and Marseille, as well as internationally, supporting talent and organizations in developing their skills.
Which format do you prefer?
30 free minutes with a training advisor — no commitment.
The TensorRT-LLM - Accelerate LLM Inference in Production training is delivered in person or remotely (blended learning, e-learning, virtual classroom, or remote instructor-led sessions). At Learni, a Qualiopi-certified training organization, each program is designed to maximize skills acquisition, regardless of the delivery mode chosen.
The trainer alternates between demonstrative, interrogative, and active methods (through practical exercises and/or real-world scenarios). This pedagogical approach ensures concrete and directly applicable learning in the workplace.
To ensure the quality of the TensorRT-LLM - Accelerate LLM Inference in Production training, Learni provides the following teaching resources:
For in-house training held at a location outside Learni's premises, the client commits to providing all the teaching materials required (IT equipment, internet connection, etc.) for the proper delivery of the training, in accordance with the prerequisites stated in the training program provided.
Skills acquired during the TensorRT-LLM - Accelerate LLM Inference in Production training are assessed through:
Learni is committed to the accessibility of its professional training programs. All our training programs are accessible to people with disabilities. Our teams are available to adapt teaching methods to your specific needs. Do not hesitate to contact us for any accommodation request.
Learni training programs are available for inter-company and intra-company settings, both in-person and remote. Registration is possible up to 48 business hours before the start of training. Our programs are eligible for OPCO, Pôle emploi, and FNE-Formation funding. Contact us to discuss your training project and funding possibilities.
Quick installation of the TensorRT-LLM environment on NVIDIA GPUs via the official Docker containers; hands-on work with the build tools to convert Hugging Face models into optimized engines; practical exercises on Llama and Mistral with FP8 and AWQ quantization; creation of your first working engine; real-time latency tests; and analysis of performance gains on concrete enterprise use cases.
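The latency tests and performance-gain analysis in this module follow a simple pattern that can be sketched in a few lines. The harness below is a hypothetical illustration: the two `generate` stubs stand in for a baseline inference pipeline and an optimized TensorRT-LLM engine (neither is a real API call), and `time.sleep` fakes the inference cost.

```python
import time

def benchmark(generate, prompt, n_runs=5):
    """Time a generate() callable; return mean latency and tokens/sec."""
    latencies = []
    tokens = 0
    for _ in range(n_runs):
        start = time.perf_counter()
        out = generate(prompt)               # stand-in for an inference call
        latencies.append(time.perf_counter() - start)
        tokens = len(out)
    mean_latency = sum(latencies) / n_runs
    return mean_latency, tokens / mean_latency

# Hypothetical stubs: sleep simulates inference cost, output is 64 tokens.
def baseline_generate(prompt):
    time.sleep(0.02)                         # "unoptimized" path
    return ["tok"] * 64

def optimized_generate(prompt):
    time.sleep(0.005)                        # "TensorRT engine" path
    return ["tok"] * 64

base_lat, base_tps = benchmark(baseline_generate, "hello")
opt_lat, opt_tps = benchmark(optimized_generate, "hello")
speedup = base_lat / opt_lat                 # the "performance gain" figure
```

The same structure applies when the stubs are replaced with real engine calls: measure both paths under identical prompts and run counts, then report mean latency, tokens/sec, and the speedup ratio.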
Exploration of TensorRT-LLM's automatic optimization passes, such as GEMM fusion and multi-head attention; manual implementation of plugins for a custom KV cache; exercises on LLMs from 7B to 70B parameters with paging and prefix caching; precise acceleration measurements via TensorRT benchmarks; development of a reusable build script for professional projects; and group validation of the deliverables.
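The paging and prefix caching exercised in this module can be sketched conceptually. The allocator below is a toy model, not TensorRT-LLM's actual implementation: blocks are plain integer ids instead of GPU memory, and prefixes are keyed by their token tuple; the point is only that a repeated prompt prefix reuses existing cache blocks instead of allocating new ones.

```python
class PagedKVCache:
    """Toy block allocator illustrating paged KV caching with prefix reuse."""

    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(num_blocks))  # pool of free block ids
        self.prefix_index = {}               # prefix tokens -> block id
        self.refcount = {}                   # block id -> number of users

    def allocate(self, token_ids):
        """Return the blocks covering token_ids, reusing cached prefixes."""
        blocks = []
        for i in range(0, len(token_ids), self.block_size):
            prefix = tuple(token_ids[: i + self.block_size])  # prefix so far
            if prefix in self.prefix_index:                   # prefix hit
                block = self.prefix_index[prefix]
            else:                                             # fresh block
                block = self.free.pop()
                self.prefix_index[prefix] = block
                self.refcount[block] = 0
            self.refcount[block] += 1
            blocks.append(block)
        return blocks

cache = PagedKVCache(num_blocks=16, block_size=4)
a = cache.allocate(list(range(8)))   # two fresh blocks for an 8-token prompt
b = cache.allocate(list(range(8)))   # identical prompt: both blocks reused
```

Real paged-attention implementations add eviction, copy-on-write for diverging suffixes, and GPU-side indirection tables; this sketch only shows the sharing mechanism.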
Integration of TensorRT-LLM with Triton Inference Server for scalable serving; multi-GPU configuration with tensor parallelism and pipeline parallelism; implementation of dynamic batching and in-flight batching to handle variable request loads; practical cases on enterprise chatbots with token streaming; GPU resource optimization using NVIDIA DCGM tools; and production of actionable monitoring metrics.
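The idea behind in-flight (continuous) batching can be shown with a toy scheduler; this is a conceptual model, not Triton's actual scheduler. Unlike static batching, a finished sequence frees its batch slot immediately and a waiting request joins on the very next decode step.

```python
from collections import deque

def in_flight_batching(requests, max_batch=4):
    """Toy continuous-batching loop.

    `requests` maps request id -> number of tokens to generate.
    Returns the decode step at which each request completed.
    """
    waiting = deque(requests)
    active = {}                          # request id -> tokens remaining
    done_at, step = {}, 0
    while waiting or active:
        # Admit new requests into free batch slots (in-flight admission).
        while waiting and len(active) < max_batch:
            rid = waiting.popleft()
            active[rid] = requests[rid]
        # One decode step generates one token for every active sequence.
        step += 1
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:         # finished: slot frees up right now
                done_at[rid] = step
                del active[rid]
    return done_at

# "a" finishes at step 2 and hands its slot to "c" immediately, while "b"
# keeps decoding -- no waiting for the whole batch to drain.
done = in_flight_batching({"a": 2, "b": 5, "c": 3, "d": 1, "e": 2}, max_batch=2)
```

With static batching, short requests would be held until the longest sequence in their batch completed; the completion steps here show each slot being recycled as soon as its sequence ends.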
Deployment in Kubernetes clusters using Helm charts for TensorRT-LLM; securing endpoints with TLS and rate limiting; CI/CD integration via GitHub Actions for automated engine rebuilds; troubleshooting real-world issues such as OOM errors and performance drift; a final capstone project on a customized LLM in a simulated production environment; delivery of performance reports; and an action plan for applying the certified skills in the enterprise.
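Rate limiting, one of the endpoint-hardening topics in this module, can be illustrated with a minimal token-bucket sketch. In production this enforcement usually lives at the ingress or API gateway, alongside TLS termination; the class below is only a self-contained model of the algorithm.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter for an inference endpoint."""

    def __init__(self, rate, capacity):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if the request may proceed, False if throttled."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=5)      # 5-request burst, 1 req/s sustained
results = [bucket.allow() for _ in range(8)]  # rapid burst of 8 requests
```

The first five requests consume the burst capacity and succeed; the remainder are rejected until the bucket refills at the sustained rate.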
Target audience
Machine learning engineers, AI developers, and AI DevOps engineers seeking to upskill in production-grade LLM optimization
Prerequisites
Proficiency in Python and in PyTorch or TensorFlow; basic knowledge of LLMs and CUDA