Founded by passionate advocates of learning and innovation, Learni set out to make professional training accessible to everyone, everywhere in the world. Our team works in major cities such as Paris, Lyon, and Marseille, as well as internationally, to support individuals and organizations in developing their skills.
Which format do you prefer?
30 free minutes with a training advisor — no commitment.
Professional training in New York in September 2026 with Learni. Certified, expert trainers; eligible for employer funding. Free quote.
Don't let this gap widen
Without mastery of ONNX Runtime optimization, ML inference in production suffers from severe latency spikes and resource inefficiency, crippling scalability.
Unoptimized deployments consume 60-80% excess compute resources, costing enterprises an average of $750,000 annually in cloud bills for high-volume workloads.
Over 55% of production ML incidents trace back to inference bottlenecks, leading to downtime that erodes 15-25% of potential revenue and undermines stakeholder trust.
Every quarter without these skills, teams risk career stagnation amid rising AI demands and falling behind the competition.
The Training ONNX Runtime - Optimizing ML Inference in Production training is delivered in person or remotely (blended learning, e-learning, virtual classroom, or live remote sessions). At Learni, a Qualiopi-certified training organization, each program is designed to maximize skills acquisition, regardless of the training mode chosen.
The trainer alternates between demonstrative, interrogative, and active methods (through practical exercises and/or real-world scenarios). This pedagogical approach ensures concrete and directly applicable learning in the workplace.
To ensure the quality of the Training ONNX Runtime - Optimizing ML Inference in Production training, Learni provides the following teaching resources:
For in-house training at a location external to Learni, the client commits to providing all necessary teaching materials (IT equipment, internet connection, etc.) for the proper conduct of the training session, in accordance with the prerequisites indicated in the communicated training program.
The assessment of skills acquired during the Training ONNX Runtime - Optimizing ML Inference in Production training is carried out through:
Learni is committed to the accessibility of its professional training programs. All our training programs are accessible to people with disabilities. Our teams are available to adapt teaching methods to your specific needs. Do not hesitate to contact us for any accommodation request.
Learni training programs are available for inter-company and intra-company settings, both in-person and remote. Registration is possible up to 48 business hours before the start of training. Our programs are eligible for OPCO, Pôle emploi, and FNE-Formation funding. Contact us to discuss your training project and funding possibilities.
- Advanced ONNX Runtime configuration
- Converting PyTorch/TensorFlow models to optimized ONNX
- Managing graphs and custom operators
- Running inference on CPU and GPU (CUDA)
- Integrating execution providers (TensorRT, DirectML)
- Dynamic sessions and caching
- Real-world performance benchmarks
- Practical exercises on enterprise cases
- Capstone project: scalable ML model deployment
- Integrated profiling tools
- MLOps best practices for production
- Hands-on edge computing inference
- Memory and latency optimization
- High-availability load testing
- Open-source code support
- Preparation for AWS/Azure cloud deployment
Target audience
Data scientists, ML engineers, MLOps architects seeking professional skill development
Prerequisites
Mastery of Python, PyTorch/TensorFlow, ONNX basics, ML model optimization