Big Data Engineer (Spark)
Posted by Nebul
About Nebul
At Nebul, we're building Europe's sovereign AI cloud - trusted, secure, and purpose-built for the next generation of intelligent infrastructure.
We're looking for an Apache Spark Specialist to take a leading role in designing, deploying, and managing Nebul's accelerated, fully managed Spark proposition - enabling our customers to extract value from data at scale with top-tier performance and security.
You will be responsible for architecting Spark environments optimized for GPU acceleration and distributed compute, ensuring seamless integration with Nebul's sovereign cloud stack. This role combines deep technical ownership with solution innovation: from deploying high-performance clusters, to shaping the product experience and supporting early customer implementations.
This is not a maintenance role - it is an opportunity to build and evolve a flagship data capability from the ground up.
What You'll Do
- Architect, deploy, and operate scalable Apache Spark environments on Nebul's sovereign AI cloud.
- Design and optimize Spark workloads for GPU-accelerated and distributed performance.
- Define and implement best practices for security, monitoring, governance, and data protection.
- Partner closely with product, engineering, and customer teams to shape our managed Spark offering.
- Evaluate and integrate complementary technologies (e.g., Delta Lake, Lakehouse components, tooling).
- Support early customer pilots and translate feedback into roadmap improvements.
- Develop automation and CI/CD deployment models to ensure reliability, repeatability, and efficiency.
- Document architectures, operational procedures, and performance benchmarks.
What We're Looking For
- 4-7 years of experience working with Apache Spark in production environments.
- Deep knowledge of Spark internals: performance tuning, partitioning strategies, caching, and shuffle management.
- Hands-on deployment experience in Kubernetes, cloud infrastructure, or on-prem clusters.
- Solid understanding of distributed data platforms (e.g., Databricks, EMR, Hadoop, Lakehouse architectures).
- Strong scripting and automation skills (Python / Scala preferred).
- Ability to translate client needs into technical architectures and operational models.
- Familiarity with cloud-security principles and infrastructure-as-code practices.
Bonus Points
- Experience with GPU acceleration for Spark, RAPIDS, or ML workloads.
- Exposure to high-security or regulated environments (government, critical industry).
- Knowledge of observability stacks (Prometheus, Grafana) and orchestration (Airflow, Argo).
- Contributions to open-source or performance engineering work.
- Background in data engineering or MLOps.
Why Nebul
- Build a sovereign European alternative to hyperscalers - with real impact.
- Lead the creation of a key product capability powering AI and data innovation.
- Collaborate with world-class engineers across cloud, security, and high-performance compute.
- Hybrid environment near The Hague and Amsterdam.
- Competitive compensation, growth opportunities, and equity participation in a critical market.
Requirements
- Based in the Netherlands (commutable to Leiden).
- Valid EU work permit (no sponsorship currently available).
- Fluent in English (Dutch not required).
Ready to build the future of accelerated data processing in Europe's sovereign AI cloud?
Apply now through Frank Poll and help Nebul deliver the fastest, most secure managed Spark platform in Europe.