SC Cleared Databricks Data Engineer - Azure Cloud

Posted by Montash

Job Description

Job Title: SC Cleared Databricks Data Engineer - Azure Cloud
Contract Type: Freelance/B2B
Day Rate: Up to £400 per day, inside IR35
Location: Remote or hybrid (as agreed)
Start Date: January 5th 2026
Clearance required: Must hold active SC Clearance

Overview
We are seeking an experienced Databricks Data Engineer to design, build, and optimise large-scale data workflows within the Databricks Data Intelligence Platform.

The role focuses on delivering high-performing batch and streaming pipelines using PySpark, Delta Lake, and Azure services, with additional emphasis on governance, lineage tracking, and workflow orchestration. Client information remains confidential.

Key Responsibilities

  • Build and orchestrate Databricks data pipelines using Notebooks, Jobs, and Workflows
  • Optimise Spark and Delta Lake workloads through cluster tuning, adaptive execution, scaling, and caching
  • Conduct performance benchmarking and cost optimisation across workloads
  • Implement data quality, lineage, and governance practices aligned with Unity Catalog
  • Develop PySpark-based ETL and transformation logic using modular, reusable coding standards
  • Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel
  • Integrate Databricks assets with Azure Data Lake Storage, Key Vault, and Azure Functions
  • Collaborate with cloud architects, data analysts, and engineering teams on end-to-end workflow design
  • Support automated deployment of Databricks artefacts via CI/CD pipelines
  • Maintain clear technical documentation covering architecture, performance, and governance configuration

Required Skills and Experience

  • Strong experience with the Databricks Data Intelligence Platform
  • Hands-on experience with Databricks Jobs and Workflows
  • Deep PySpark expertise, including schema management and optimisation
  • Strong understanding of Delta Lake architecture and incremental design principles
  • Proven Spark performance engineering and cluster tuning capabilities
  • Unity Catalog experience (data lineage, access policies, metadata governance)
  • Azure experience across ADLS Gen2, Key Vault, and serverless components
  • Familiarity with CI/CD deployment for Databricks
  • Solid troubleshooting skills in distributed environments

Preferred Qualifications

  • Experience working across multiple Databricks workspaces and governed catalogs
  • Knowledge of Synapse, Power BI, or related Azure analytics services
  • Understanding of cost optimisation for data compute workloads
  • Strong communication and cross-functional collaboration skills