Job Description

About the Role

We are seeking a highly skilled and hands-on Senior Data Engineer to join our EDH initiative. This role demands deep expertise in building scalable data pipelines and automation frameworks using Databricks and the Azure ecosystem. The ideal candidate will have a strong engineering mindset, leadership experience, and a passion for solving complex data challenges.

Key Responsibilities

  • Design, develop, and optimize scalable data pipelines using Databricks and Apache Spark
  • Implement job orchestration and automation using tools such as Azure Data Factory, Azure Functions, and Azure DevOps
  • Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions
  • Lead and mentor junior engineers, ensuring best practices in coding, testing, and deployment
  • Manage and monitor data workflows, ensuring reliability and performance
  • Contribute to architectural decisions and help shape the future of the EDH platform

Required Qualifications

  • 5–7 years of hands-on experience in Databricks and Spark-based data engineering
  • Strong proficiency in the Azure tech stack, including Azure Data Lake, Azure Data Factory, Azure Synapse, and Azure Functions
  • Proven experience in job automation, CI/CD pipelines, and workflow orchestration
  • Solid understanding of data modeling, ETL/ELT processes, and performance tuning
  • Experience leading small teams or mentoring engineers in a fast-paced environment
  • Excellent problem-solving and communication skills

Preferred Qualifications

  • Experience with Delta Lake, Unity Catalog, and MLflow
  • Familiarity with data governance and security best practices in cloud environments
  • Exposure to Agile methodologies and DevOps practices

Apply for this Position

Ready to join? Submit your application below.
