Founded by passionate advocates of learning and innovation, Learni set out to make professional training accessible to everyone, everywhere in the world. Our teams operate in major cities such as Paris, Lyon, and Marseille, as well as internationally, supporting talent and organizations in developing their skills.
Which format do you prefer?
30 free minutes with a training advisor — no commitment.
The TensorRT-LLM - Accelerate LLM Inference in Production training is delivered in person or remotely (blended learning, e-learning, virtual classroom, or live remote sessions). At Learni, a Qualiopi-certified training organization, each program is designed to maximize skills acquisition, regardless of the delivery mode chosen.
The trainer alternates between demonstrative, interrogative, and active methods (through practical exercises and/or real-world scenarios). This pedagogical approach ensures concrete and directly applicable learning in the workplace.
To ensure the quality of the TensorRT-LLM - Accelerate LLM Inference in Production training, Learni provides the following teaching resources:
For in-house training held at a location outside Learni's premises, the client commits to providing all teaching materials needed (IT equipment, internet connection, etc.) for the proper delivery of the training, in accordance with the prerequisites stated in the training program provided.
The assessment of skills acquired during the TensorRT-LLM - Accelerate LLM Inference in Production training is carried out through:
Learni is committed to the accessibility of its professional training programs. All our training programs are accessible to people with disabilities. Our teams are available to adapt teaching methods to your specific needs. Do not hesitate to contact us for any accommodation request.
Learni training programs are available for inter-company and intra-company settings, both in-person and remote. Registration is possible up to 48 business hours before the start of training. Our programs are eligible for OPCO, Pôle emploi, and FNE-Formation funding. Contact us to discuss your training project and funding possibilities.
Discover the TensorRT-LLM architecture through a complete installation on NVIDIA GPU environments, configure PyTorch and Triton Inference Server dependencies, perform your first benchmarks on Llama and GPT-like models, analyze immediate speed gains, and produce an initial performance report with guided practical exercises to consolidate expert basics in inference optimization.
Dive into TensorRT-LLM engine compilation with FP8/INT4 quantization techniques, integrate the multi-GPU plugin for horizontal scaling, test on real datasets like MT-Bench, optimize custom kernels via TensorRT graphs, deploy in dynamic batching mode, and validate memory reductions up to 70% during intensive hands-on workshops.
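The arithmetic behind the "up to 70%" memory-reduction figure is straightforward: raw weight storage scales with bits per weight, so 4-bit weights take a quarter of what FP16 takes (75% saved in the ideal case; real savings are somewhat lower once quantization scales, activations, and the KV cache are counted). A back-of-the-envelope sketch, assuming a 7B-parameter Llama-class model:

```python
def weight_memory_gib(n_params, bits_per_weight):
    """Approximate raw weight storage at a given precision.

    Ignores quantization scale/zero-point overhead, activations,
    and the KV cache: this is only the weight tensors themselves.
    """
    return n_params * bits_per_weight / 8 / 2**30

params = 7e9  # assumed model size; adjust to your model
fp16 = weight_memory_gib(params, 16)
fp8  = weight_memory_gib(params, 8)
int4 = weight_memory_gib(params, 4)
print(f"FP16: {fp16:.1f} GiB, FP8: {fp8:.1f} GiB, INT4: {int4:.1f} GiB")
print(f"INT4 saves {1 - int4 / fp16:.0%} of weight memory vs FP16")
```

This is why INT4 engines that barely fit a single consumer GPU can hold models that would otherwise require multi-GPU FP16 deployments.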
Integrate TensorRT-LLM with Triton Inference Server for scalable REST/gRPC APIs, configure Kubernetes orchestration with NVIDIA Helm charts, manage load balancing and auto-scaling, simulate production loads with Locust, debug bottlenecks via NVIDIA Nsight, and deliver an enterprise-ready inference server with concrete chatbot and RAG application cases.
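Triton Inference Server exposes the HTTP "generate" extension at `POST /v2/models/<name>/generate`. A minimal stdlib-only client sketch is shown below; the model name `ensemble` and the `text_input`/`max_tokens`/`temperature` field names follow the common TensorRT-LLM backend configuration, but your own deployment's `config.pbtxt` may differ, so treat them as assumptions:

```python
import json
import urllib.request

# Assumed local Triton endpoint; adjust host, port, and model name.
TRITON_URL = "http://localhost:8000/v2/models/ensemble/generate"

def build_request(prompt, max_tokens=128, temperature=0.7):
    """Build (but do not send) a Triton generate-extension request."""
    payload = {
        "text_input": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        TRITON_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Explain dynamic batching in one sentence.")
print(req.full_url)
print(req.data.decode())

# Against a running server you would then do:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["text_output"])
```

The same payload shape is what a Locust user class would send in a loop when simulating production load.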
Push the limits with optimized TensorRT-LLM in-context learning, benchmark against vLLM and HuggingFace TGI on A100/H100 hardware, implement KV-cache compression and paged attention, analyze latency/quality trade-offs via Perplexity scores, deploy an end-to-end RAG pipeline, and finalize with a capstone project certifying your professional skills in performant AI.
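To see why KV-cache compression and paged attention matter, it helps to put numbers on the cache itself. The sketch below uses assumed Llama-2-7B-like shapes (32 layers, 32 KV heads, head dimension 128, FP16 cache) and a hypothetical 64-token page size; the formula is the standard keys-plus-values accounting, not code from any specific library:

```python
import math

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Per-sequence KV-cache size: keys + values, every layer and head."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

def pages_needed(seq_len, page_size=64):
    """Paged attention allocates the cache in fixed-size blocks, so a
    sequence wastes at most one partial page instead of a whole
    max-length contiguous buffer."""
    return math.ceil(seq_len / page_size)

# Assumed Llama-2-7B-like configuration
size = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128, seq_len=4096)
print(f"KV cache for one 4096-token sequence: {size / 2**30:.2f} GiB")
print(f"Pages for a 100-token prompt: {pages_needed(100)}")
```

At 2 GiB per full-length sequence, a handful of concurrent requests exhausts an A100's memory unless the cache is compressed or paged, which is exactly the latency/quality trade-off this module benchmarks.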
Target audience
AI Engineers, deep learning data scientists, ML architects in enterprise for upskilling in inference optimization
Prerequisites
Advanced expertise in PyTorch/TensorFlow, mastery of CUDA and GPU programming, experience deploying LLMs in production