Job Description

Roles & Responsibilities 

  • Work on Azure Databricks pipelines—enhance, modify, and optimize existing pipelines and develop new ones when required. 

  • Handle data migration from SAP, SQL Server, and other enterprise databases into Azure cloud environments (Lakehouse, Delta, Synapse).

  • Configure, manage, and optimize Databricks clusters, compute, workspaces, and resources for performance and cost efficiency. 

  • Build scalable ETL/ELT processes using PySpark, SQL, Delta Lake, and medallion architecture across Bronze–Silver–Gold layers (see the Bronze-to-Silver sketch after this list).

  • Work extensively with Azure Data Factory, including orchestration pipelines, triggers, monitoring, and integration with Databricks.

  • Design and maintain data warehouse & dimensional models (Facts, Dimensions, SCD, Star/Snowflake schema). 

  • Ensure strong data quality, validation, governance, and security, including Unity Catalog, RBAC, and Azure platform controls (a permissions sketch also follows this list).

  • Develop solutions independently, owning end-to-end development rather than support work; the role is initially 100% development-focused.
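
As a rough illustration of the medallion pattern named in the responsibilities above, the sketch below shows one possible Bronze-to-Silver step in PySpark with Delta Lake. All table and column names are hypothetical placeholders, not details of this role's actual environment.

```python
# Minimal sketch of a Bronze -> Silver medallion step on Databricks.
# Table names and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created in Databricks notebooks

# Read raw ingested records from the Bronze layer (stored as Delta).
bronze = spark.read.table("bronze.sales_orders")

# Cleanse and conform: drop duplicates, enforce types, add an audit column.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("_processed_at", F.current_timestamp())
)

# Persist the conformed data to the Silver layer as a managed Delta table.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sales_orders")
```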
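
Similarly, for the governance bullet: on Databricks, Unity Catalog permissions can be granted through Spark SQL. This is a minimal sketch only; the catalog, schema, table, and group names are invented for illustration.

```python
# Sketch of Unity Catalog access control (RBAC) from a Databricks notebook.
# Catalog, schema, table, and group names are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-created in Databricks notebooks

grants = [
    "GRANT USE CATALOG ON CATALOG analytics TO `data_engineers`",
    "GRANT USE SCHEMA ON SCHEMA analytics.silver TO `data_engineers`",
    "GRANT SELECT ON TABLE analytics.silver.sales_orders TO `bi_readers`",
]
for statement in grants:
    spark.sql(statement)  # each GRANT is applied through Spark SQL
```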

Skill Set Required

  • Azure Databricks (Notebooks, Clusters, Delta Lake, Optimization, Resource Configuration) 

  • Azure Data Lake / Lakehouse Architecture 

  • Azure Data Factory (Pipelines, Orchestration, Integration, Monitoring) 

  • Azure SQL / SQL Server 

  • PySpark, SQL for transformation and performance tuning 

  • Data Warehouse Fundamentals

    • Star & Snowflake schema

    • Fact & Dimension modelling

    • SCD, Data Marts (see the merge sketch after this list)

  • Azure Cloud Services (Key Vault, Storage, Synapse basics, RBAC, security) 

  • ETL/ELT development (end-to-end implementation) 
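
For context on the SCD item above: slowly changing dimensions are commonly maintained on Delta Lake with a MERGE. Below is a minimal sketch of a Type 1 (overwrite-in-place) upsert using the Delta Lake Python API; table and column names are hypothetical.

```python
# Sketch of an SCD Type 1 (overwrite-in-place) upsert into a Delta dimension.
# Table and column names are hypothetical; requires the delta-spark package,
# which is preinstalled on Databricks clusters.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.table("silver.customers")        # conformed source rows
dim = DeltaTable.forName(spark, "gold.dim_customer")  # target dimension table

(
    dim.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdate(set={"name": "s.name", "segment": "s.segment"})
    .whenNotMatchedInsert(values={
        "customer_id": "s.customer_id",
        "name": "s.name",
        "segment": "s.segment",
    })
    .execute()
)
```

A Type 2 variant would instead close out the current row (end-date it) and insert a new versioned row rather than updating in place.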
