Founded by passionate advocates of learning and innovation, Learni set out to make professional training accessible to everyone, everywhere in the world. Our team works in major cities such as Paris, Lyon, and Marseille, as well as internationally, to support talent and organizations in developing their skills.
The TensorRT-LLM - Optimizing LLM Inference in Production training is delivered in person or remotely (blended learning, e-learning, virtual classroom, remote instructor-led). At Learni, a Qualiopi-certified training organization, each program is designed to maximize skills acquisition, regardless of the training mode chosen.
The trainer alternates between demonstrative, interrogative, and active methods (through practical exercises and/or real-world scenarios). This pedagogical approach ensures concrete learning that is directly applicable in the workplace.
To ensure the quality of the TensorRT-LLM - Optimizing LLM Inference in Production training, Learni provides the following teaching resources:
For in-house training held at a location outside Learni's premises, the client commits to providing all the teaching materials needed (IT equipment, internet connection, etc.) for the training to run properly, in accordance with the prerequisites listed in the training program provided.
Skills acquired during the TensorRT-LLM - Optimizing LLM Inference in Production training are assessed through:
Learni is committed to the accessibility of its professional training programs. All our training programs are accessible to people with disabilities. Our teams are available to adapt teaching methods to your specific needs. Do not hesitate to contact us for any accommodation request.
Learni training programs are available for inter-company and intra-company settings, both in-person and remote. Registration is possible up to 48 business hours before the start of training. Our programs are eligible for OPCO, Pôle emploi, and FNE-Formation funding. Contact us to discuss your training project and funding possibilities.
Complete installation of the TensorRT-LLM environment on an NVIDIA GPU, hands-on work with Docker and the CUDA Toolkit to build a first engine from a HuggingFace model such as Llama-2, practical exercises on converting HuggingFace checkpoints to TensorRT engines, verification of initial performance with the integrated benchmarks, generation of latency and throughput reports, and setup of an ongoing enterprise LLM project to measure immediate gains.
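As a taste of what this first module covers, here is a minimal sketch of loading a HuggingFace Llama-2 checkpoint through TensorRT-LLM's high-level LLM API and timing a first generation. The import path, argument names, model identifier, and result structure are assumptions that may differ depending on the TensorRT-LLM version installed.

```python
# Minimal sketch (assumed recent TensorRT-LLM high-level "LLM" API; names may differ by version).
import time

from tensorrt_llm import LLM, SamplingParams  # assumed import path

# Builds (or loads a cached) TensorRT engine from a HuggingFace checkpoint.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")  # hypothetical model id for illustration

prompts = ["Summarize the benefits of optimized LLM inference in one sentence."]
params = SamplingParams(max_tokens=64, temperature=0.2)

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

generated = outputs[0].outputs[0].text            # assumed result structure
n_tokens = len(outputs[0].outputs[0].token_ids)   # assumed result structure

print(generated)
print(f"latency: {elapsed:.2f} s, throughput: {n_tokens / elapsed:.1f} tokens/s")
```

The same timing pattern can be reused after each optimization step to keep the latency and throughput reports mentioned above comparable across the week.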
Exploration of INT4/INT8/FP8 quantization techniques in TensorRT-LLM, automatic fusion of attention and MLP layers to reduce GPU memory, development of custom GEMM plugins to accelerate matrix computations, practical cases on models such as Mistral with hyperparameter tuning, profiling with Nsight Systems to identify bottlenecks, production of an optimized engine delivering up to a 3x speedup, and documentation of the technical choices so they can be replicated in production.
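To make the memory argument behind quantization concrete, the back-of-the-envelope calculation below estimates the weight footprint of a 7B-parameter model at different precisions. The 7B figure is illustrative, and real engines also hold activations, the KV cache, and workspace buffers, so these numbers are a lower bound on GPU memory.

```python
# Rough weight-memory estimate per precision; ignores activations, KV cache, and runtime buffers.
BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "INT8": 1.0, "INT4": 0.5}

def weight_footprint_gib(n_params: float, precision: str) -> float:
    """Approximate weight memory in GiB for a given parameter count and precision."""
    return n_params * BYTES_PER_PARAM[precision] / 1024**3

n_params = 7e9  # illustrative 7B-parameter model (Llama-2-7B / Mistral-7B class)
for precision in ("FP16", "FP8", "INT8", "INT4"):
    print(f"{precision:>4}: ~{weight_footprint_gib(n_params, precision):.1f} GiB of weights")
# FP16 ~13.0 GiB, FP8/INT8 ~6.5 GiB, INT4 ~3.3 GiB: roughly 2x and 4x less GPU memory for weights.
```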
Integration of TensorRT-LLM with NVIDIA Triton Inference Server for multi-model deployments, management of the KV cache for long-context streaming inference, configuration of dynamic batching and tensor parallelism on GPU clusters, exercises on horizontal scaling with Kubernetes, load testing with Locust to simulate 1,000+ requests per second, resolution of real enterprise error cases, and deployment of a secure, production-ready REST API service.
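As an illustration of the load-testing step, the sketch below uses Locust (the tool named above) to fire concurrent requests at a text-generation endpoint. The endpoint path, JSON payload, and model name are assumptions standing in for whatever the deployed Triton or REST service actually exposes.

```python
# locustfile.py: minimal Locust sketch for load-testing an LLM inference endpoint.
# The route and payload below are illustrative assumptions, not a fixed Triton contract.
from locust import HttpUser, task, between

class InferenceUser(HttpUser):
    # Small think time between requests from each simulated user.
    wait_time = between(0.1, 0.5)

    @task
    def generate(self) -> None:
        payload = {
            "text_input": "Explain KV-cache reuse in one sentence.",
            "max_tokens": 64,
        }
        # Hypothetical generate route; adjust to the actual deployed service.
        with self.client.post("/v2/models/llama/generate", json=payload,
                              catch_response=True) as response:
            if response.status_code != 200:
                response.failure(f"HTTP {response.status_code}")

# Example run simulating 1,000 concurrent users spawned at 100 users/second:
#   locust -f locustfile.py --headless -u 1000 -r 100 -H http://inference-host:8000
```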
In-depth performance analysis with NVIDIA Nsight Compute and DCGM to tune TensorRT-LLM kernels, multi-GPU optimization with tensor and pipeline parallelism on 70B-parameter LLMs, resolution of challenges such as out-of-memory errors via intelligent paging and swapping, workshops on concrete enterprise cases such as scalable chatbots, finalization of the ongoing project with live Q&A, evaluation of final gains (latency under 50 ms, 70% cost reduction), and delivery of reusable, certification-ready templates.
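To give a flavor of the multi-GPU and measurement work in this module, the sketch below shards a 70B-class model across four GPUs with tensor parallelism and reports latency percentiles against a target. The import path, the tensor_parallel_size argument name, and the model identifier are assumptions tied to the high-level LLM API sketched earlier; note that a sub-50 ms target only makes sense for short generations or per-output-token timings, whereas this measures end-to-end latency.

```python
# Sketch: tensor-parallel deployment and a crude latency-percentile check (assumed LLM API).
import statistics
import time

from tensorrt_llm import LLM, SamplingParams  # assumed import path (see the earlier sketch)

llm = LLM(
    model="meta-llama/Llama-2-70b-chat-hf",  # hypothetical 70B checkpoint id for illustration
    tensor_parallel_size=4,                  # assumed argument name; shards weights across 4 GPUs
)
params = SamplingParams(max_tokens=32)

# Sample end-to-end request latencies in milliseconds.
latencies = []
for _ in range(50):
    start = time.perf_counter()
    llm.generate(["Ping"], params)
    latencies.append((time.perf_counter() - start) * 1000)

p50 = statistics.median(latencies)
p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
print(f"p50={p50:.1f} ms  p95={p95:.1f} ms")
```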
Target audience
Machine learning engineers, AI DevOps engineers, and AI architects in companies seeking to upskill in LLM optimization
Prerequisites
Expertise in PyTorch/TensorFlow, CUDA programming, and deployment of LLM models such as Llama or GPT





























