Founded by passionate advocates of learning and innovation, Learni set out to make professional training accessible to everyone, everywhere. Our teams work in major cities such as Paris, Lyon, and Marseille, as well as internationally, to support individuals and organizations in developing their skills.
Which format do you prefer?
30 free minutes with a training advisor — no commitment.
The TensorRT-LLM - Accelerate LLM Inference x10 in Production training is delivered in person or remotely (blended learning, e-learning, virtual classroom, or remote classroom). At Learni, a Qualiopi-certified training organization, each program is designed to maximize skills acquisition regardless of the training mode chosen.
The trainer alternates between demonstrative, interrogative, and active methods (practical exercises and/or real-world scenarios). This pedagogical approach ensures concrete learning that is directly applicable in the workplace.
To ensure the quality of the TensorRT-LLM - Accelerate LLM Inference x10 in Production training, Learni provides the following teaching resources:
For in-house training held at a location outside Learni's premises, the client commits to providing all teaching materials required (IT equipment, internet connection, etc.) for the proper conduct of the training, in accordance with the prerequisites indicated in the training program provided.
Skills acquired during the TensorRT-LLM - Accelerate LLM Inference x10 in Production training are assessed through:
Learni is committed to the accessibility of its professional training programs. All our training programs are accessible to people with disabilities. Our teams are available to adapt teaching methods to your specific needs. Do not hesitate to contact us for any accommodation request.
Learni training programs are available for inter-company and intra-company settings, both in-person and remote. Registration is possible up to 48 business hours before the start of training. Our programs are eligible for OPCO, Pôle emploi, and FNE-Formation funding. Contact us to discuss your training project and funding possibilities.
Install TensorRT-LLM on GPU clusters, configure Docker environments with CUDA 12+, and build your first TRT-LLM engines from Hugging Face models such as Llama 2. Test real-time batched inference, analyze profiling logs to identify initial bottlenecks, and produce a first optimized deliverable ready for scaling, with exercises on real business cases to anchor professional, certifying skills.
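Before building an engine from a Hugging Face checkpoint, it helps to sanity-check that the weights even fit on the target GPU. A minimal sketch, assuming a Llama-2-7B-scale model; the parameter count and dtype width are illustrative inputs, and a real engine also needs memory for activations and the KV cache on top of this figure.

```python
def weight_vram_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate VRAM (GiB) needed just to hold the model weights."""
    return n_params * bytes_per_param / 1024**3

# Llama-2-7B in FP16: ~7e9 params * 2 bytes/param, roughly 13 GiB of weights
llama2_7b_fp16 = weight_vram_gib(7e9, 2)
print(f"Llama-2-7B FP16 weights: ~{llama2_7b_fp16:.1f} GiB")
```

This back-of-the-envelope check is useful in the installation phase: if the weight footprint alone approaches the card's VRAM, batching headroom will be minimal and quantization or sharding should be planned from the start.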
Develop custom kernels via TensorRT plugins to accelerate FlashAttention and rotary embeddings, fuse GEMM matrix operations with cuBLAS, and benchmark the gains on GPT-J and Mistral. Implement INT8/FP8 quantization to cut VRAM usage by 50%, apply the techniques to the running capstone project, and generate detailed performance reports that turn your skills into immediately usable enterprise assets.
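The 50% VRAM figure quoted above follows directly from the dtype widths: INT8 and FP8 both store one byte per weight versus FP16's two. A small sketch of that arithmetic (weight memory only; activation and KV-cache savings depend on the quantization scheme and are not covered here):

```python
def quantized_savings(fp_bytes: int, q_bytes: int) -> float:
    """Fraction of weight memory saved by quantizing from fp_bytes/param
    down to q_bytes/param (weights only, ignoring scales/zero-points)."""
    return 1 - q_bytes / fp_bytes

# FP16 (2 bytes) -> INT8 or FP8 (1 byte): half the weight memory
print(quantized_savings(2, 1))  # 0.5, i.e. the 50% reduction cited
```

In practice the saving is slightly below 50% because quantization adds per-channel scale factors, but for large models that overhead is negligible.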
Master tensor and pipeline parallelism on multi-GPU A100/H100 setups, configure NCCL for fast inter-GPU communication, and deploy sharded inference for 70B+ models. Test fault tolerance and load balancing, integrate with Kubernetes for orchestration, simulate high production loads on concrete e-commerce AI cases, and produce scalable deliverable architectures that boost your professional projects with ready-to-use, certifying expertise.
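Tensor parallelism makes 70B+ models feasible because each GPU holds only its shard of the weights. A hedged sizing sketch, assuming ideal even sharding (real frameworks replicate embeddings and some layers, so per-GPU usage is somewhat higher):

```python
def shard_weight_gib(n_params: float, bytes_per_param: int, tp_size: int) -> float:
    """Per-GPU weight memory under idealized even tensor-parallel sharding."""
    return n_params * bytes_per_param / tp_size / 1024**3

# A 70B-parameter model in FP16 across 8 GPUs: ~16 GiB of weights per GPU,
# leaving ample room on an A100-80GB for KV cache and activations.
per_gpu = shard_weight_gib(70e9, 2, 8)
print(f"~{per_gpu:.1f} GiB of weights per GPU")
```

The same formula shows why a single 80 GB card cannot hold the unsharded FP16 model (about 130 GiB of weights), which is the motivation for the multi-GPU deployment covered in this module.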
Design end-to-end pipelines with Triton Inference Server to chain TensorRT-LLM and embedding models, expose secure gRPC/REST APIs, and manage token streaming and the KV cache for real-time chatbots. Integrate Prometheus/Grafana monitoring, test high availability on real workloads, develop CI/CD deployment scripts, and finalize the capstone project with clear ROI metrics, leveraging your enterprise skills through production-ready, certifying solutions.
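KV-cache sizing drives how many concurrent chatbot sessions a serving node can hold. A minimal sketch of the standard per-token formula, using an illustrative Llama-2-7B-like configuration (32 layers, 32 KV heads, head dimension 128, FP16); models with grouped-query attention shrink this by reducing the KV-head count:

```python
def kv_cache_bytes_per_token(n_layers: int, n_kv_heads: int,
                             head_dim: int, bytes_per_el: int) -> int:
    """Bytes of KV cache per generated/prompt token: K and V each store
    n_layers * n_kv_heads * head_dim elements."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_el

# Llama-2-7B-like config in FP16: 512 KiB of cache per token, so a
# 4096-token conversation holds ~2 GiB of KV cache on its own.
per_token = kv_cache_bytes_per_token(32, 32, 128, 2)
print(per_token, "bytes/token")
```

This is why the module pairs streaming with cache management: evicting or paging finished sessions is what keeps long-context, multi-user serving within VRAM limits.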
Optimize final latencies via advanced TensorRT-LLM profiling, implement OWASP-aligned security measures for sensitive inference workloads, and deploy via Helm on EKS/GKE. Analyze cloud vs. on-prem costs with ROI calculators, go through expert code reviews of the capstone project, prepare for professional skills certification, and leave with a complete toolbox for turning your enterprise AI infrastructure into a performance leader.
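The cloud-vs-on-prem analysis mentioned above often reduces to a break-even calculation. A hedged sketch with purely hypothetical figures (every dollar amount below is invented for illustration, not a quoted price):

```python
def breakeven_months(purchase_cost: float, monthly_onprem_opex: float,
                     monthly_cloud_cost: float) -> float:
    """Months until buying GPUs beats renting them, given hardware cost
    and recurring costs on each side."""
    monthly_saving = monthly_cloud_cost - monthly_onprem_opex
    if monthly_saving <= 0:
        return float("inf")  # cloud is cheaper or equal; never breaks even
    return purchase_cost / monthly_saving

# Hypothetical: $200k of GPUs vs $25k/month cloud, $5k/month power + ops
print(breakeven_months(200_000, 5_000, 25_000), "months")
```

A real ROI calculator would also fold in depreciation, utilization rates, and engineering overhead, but this break-even skeleton is the core of the comparison.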
Target audience
ML engineers, AI DevOps engineers, and data scientists experienced in deep learning who want to develop advanced skills
Prerequisites
Proficiency in PyTorch/TensorFlow, advanced CUDA, optimized C++/Python, and GPU architectures