Founded by passionate advocates of learning and innovation, Learni set out to make professional training accessible to everyone, everywhere in the world. Our team works in major cities such as Paris, Lyon, and Marseille, as well as internationally, to support talent and organizations in developing their skills.
Which format do you prefer?
30 free minutes with a training advisor — no commitment.
The Apache Iceberg - Optimizing Scalable Data Lakes training is delivered in person or remotely (blended learning, e-learning, virtual classroom, or live remote sessions). At Learni, a Qualiopi-certified training organization, each program is designed to maximize skills acquisition, whatever the delivery mode chosen.
The trainer alternates between demonstration, guided questioning, and active methods (practical exercises and/or real-world scenarios). This pedagogical approach ensures concrete learning that is directly applicable in the workplace.
To ensure the quality of the Apache Iceberg - Optimizing Scalable Data Lakes training, Learni provides the following teaching resources:
For in-house training held at a location outside Learni's premises, the client commits to providing all teaching materials needed (IT equipment, internet connection...) for the training to run properly, in accordance with the prerequisites stated in the training program provided.
Skills acquired during the Apache Iceberg - Optimizing Scalable Data Lakes training are assessed through:
Learni is committed to the accessibility of its professional training programs. All our training programs are accessible to people with disabilities. Our teams are available to adapt teaching methods to your specific needs. Do not hesitate to contact us for any accommodation request.
Learni training programs are available for inter-company and intra-company settings, both in-person and remote. Registration is possible up to 48 business hours before the start of training. Our programs are eligible for OPCO, Pôle emploi, and FNE-Formation funding. Contact us to discuss your training project and funding possibilities.
Dive into Apache Iceberg's distributed architecture, with a focus on 2026 evolutions. Configure a Spark cluster and MinIO for Iceberg tables, and explore manifests, metadata files, and ACID transactions. Hands-on exercises cover creating scalable data lakes and committing initial data with consistency validation. You will produce an architectural analysis report for your enterprise use case.
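As a minimal sketch of this first module, the fragment below creates an Iceberg table from Spark SQL and inspects the metadata layer. The catalog name `demo`, the namespace `lake.events`, and the MinIO warehouse path are hypothetical placeholders for your own environment; the metadata tables (`snapshots`, `manifests`) are part of Iceberg's Spark integration.

```sql
-- Assumed Spark catalog configuration (placeholders):
--   spark.sql.catalog.demo           = org.apache.iceberg.spark.SparkCatalog
--   spark.sql.catalog.demo.type      = hadoop
--   spark.sql.catalog.demo.warehouse = s3a://warehouse/   (MinIO bucket)

CREATE TABLE demo.lake.events (
    id      BIGINT,
    payload STRING,
    ts      TIMESTAMP
) USING iceberg;

-- Each INSERT is an ACID commit that produces a new table snapshot.
INSERT INTO demo.lake.events VALUES (1, 'first commit', current_timestamp());

-- Iceberg exposes its metadata layer as queryable system tables:
SELECT snapshot_id, operation FROM demo.lake.events.snapshots;
SELECT path, length          FROM demo.lake.events.manifests;
```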
Master schema evolution without downtime using advanced ALTER TABLE statements. Implement dynamic partitioning and merge-on-read for storage optimization, and handle Parquet/ORC formats with Spark SQL. Practical cases cover refactoring existing data lakes and schema compatibility testing. You will generate partitioned materialized views and document strategies for professional data teams.
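The kind of zero-downtime evolution this module describes can be sketched with Iceberg's Spark SQL extensions. Table and column names (`demo.lake.events`, `visits`) are hypothetical; schema and partition changes here are metadata-only operations, so no existing data files are rewritten.

```sql
-- Schema evolution: metadata-only, existing files remain readable.
ALTER TABLE demo.lake.events ADD COLUMN country STRING;
ALTER TABLE demo.lake.events RENAME COLUMN payload TO body;
-- Assuming `visits` was INT; only widening type promotions are allowed.
ALTER TABLE demo.lake.events ALTER COLUMN visits TYPE BIGINT;

-- Partition evolution: new writes use the new spec, old files stay valid.
ALTER TABLE demo.lake.events ADD PARTITION FIELD days(ts);

-- Switch row-level operations to merge-on-read for write-heavy workloads.
ALTER TABLE demo.lake.events SET TBLPROPERTIES (
    'write.delete.mode' = 'merge-on-read',
    'write.update.mode' = 'merge-on-read',
    'write.merge.mode'  = 'merge-on-read'
);
```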
Integrate Apache Iceberg into hybrid ecosystems with Spark 4.x, Trino, and Flink for federated queries, and configure Hive Metastore and REST catalogs. Exercises cover batch and streaming ingestion of terabytes, optimizing cross-engine joins, and real-world performance benchmarks on enterprise datasets. You will deploy unified ETL pipelines and export reproducible configurations for production.
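One way to picture the multi-catalog setup this module covers: register a Hive Metastore catalog and a REST catalog side by side in Spark, then query across them. Catalog names, URIs, and table names below are hypothetical placeholders; Trino and Flink can point at the same REST catalog and see the same tables, which is what enables federated access.

```sql
-- Assumed spark-defaults.conf fragment (placeholders throughout):
--   spark.sql.catalog.hive_cat       = org.apache.iceberg.spark.SparkCatalog
--   spark.sql.catalog.hive_cat.type  = hive
--   spark.sql.catalog.hive_cat.uri   = thrift://metastore:9083
--   spark.sql.catalog.rest_cat       = org.apache.iceberg.spark.SparkCatalog
--   spark.sql.catalog.rest_cat.type  = rest
--   spark.sql.catalog.rest_cat.uri   = http://iceberg-rest:8181

-- With both catalogs registered, Spark SQL can join across them:
SELECT o.id, c.segment
FROM rest_cat.sales.orders o
JOIN hive_cat.crm.customers c ON o.customer_id = c.id;
```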
Optimize queries on large volumes via automatic compaction, manifest rewrites, and vacuuming of obsolete files. Tune Spark for Iceberg with Z-ordering and sorting, and analyze profilers and Prometheus metrics. Exercises achieve a 70% latency reduction through proactive maintenance, drawing on real data warehouse migration cases. You will write Airflow scripts for production routines and measure ROI directly on your workload.
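A maintenance routine of the kind described above might look like the following, using the stored procedures shipped with Iceberg's Spark integration. The catalog (`demo`), table (`lake.events`), sort columns, and cutoff timestamp are hypothetical; an Airflow DAG would typically schedule these calls.

```sql
-- Compact small data files, Z-ordering rows within the rewritten files:
CALL demo.system.rewrite_data_files(
    table      => 'lake.events',
    strategy   => 'sort',
    sort_order => 'zorder(id, ts)'
);

-- Rewrite manifests so metadata reads stay cheap as the table grows:
CALL demo.system.rewrite_manifests('lake.events');

-- Expire old snapshots and delete the files only they reference:
CALL demo.system.expire_snapshots(
    table      => 'lake.events',
    older_than => TIMESTAMP '2026-01-01 00:00:00'
);

-- Remove files that no snapshot references at all:
CALL demo.system.remove_orphan_files(table => 'lake.events');
```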
Leverage time travel and snapshots for GDPR-compliant audits, and implement multi-version branching and row-level security. Governance is covered with Ranger and fine-grained access control, along with deploying resilient Kubernetes clusters via Helm. Final exercises cover rollbacks from real incidents and simulated data leaks with recovery. You will deliver a complete capstone project with Grafana monitoring and a strategic scaling plan for the enterprise.
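The time-travel and rollback workflow from this final module can be sketched as below. Snapshot IDs, timestamps, and the branch name are hypothetical placeholders; the `history` metadata table and the `rollback_to_snapshot` procedure are part of Iceberg's Spark integration, and branch DDL assumes Iceberg 1.2 or later.

```sql
-- Time travel: query the table as of a timestamp or a specific snapshot.
SELECT * FROM demo.lake.events TIMESTAMP AS OF '2026-03-01 00:00:00';
SELECT * FROM demo.lake.events VERSION AS OF 8744736658442914487;

-- Inspect the history to find the snapshot to recover:
SELECT made_current_at, snapshot_id, is_current_ancestor
FROM demo.lake.events.history;

-- Roll the table back after a bad write (simulated incident recovery):
CALL demo.system.rollback_to_snapshot('lake.events', 8744736658442914487);

-- Branching: isolate audit or experimental writes from the main branch.
ALTER TABLE demo.lake.events CREATE BRANCH audit_2026;
```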
Target audience
Data engineers, data architects, and enterprise big data experts seeking advanced skill development
Prerequisites
Expertise in Apache Spark SQL, S3/HDFS data lakes, Delta Lake, Hadoop fundamentals, and data governance