Job Description
Position Name: Senior Data Engineer – Databricks
Location: Pune / Hyderabad / Bengaluru
Experience: 6 to 10 years
About Jade Global
Jade Global is a premier global IT services and solutions firm committed to helping clients drive business transformation through enterprise applications, AI/ML, cloud platforms, automation, and managed services. As a trusted partner for industry leaders across Hi-Tech, Healthcare, Manufacturing, and Financial Services, we bring deep domain knowledge, agility, and a relentless focus on client success.
About Position
We are seeking a highly skilled and experienced Senior Data Engineer – Databricks to join our Data Engineering team. The ideal candidate will have strong hands-on expertise in Databricks, Apache Spark, Delta Lake, and cloud-based data platforms. You will be responsible for building scalable, high-performance data pipelines and enabling advanced analytics to support data-driven decision-making across the organization.
Key Responsibilities:
- Design, develop, and implement scalable ETL/ELT data pipelines using Apache Spark on Databricks.
- Build, optimize, and manage data lakes and data warehouses using Delta Lake and related technologies.
- Develop, schedule, and monitor workflows using Databricks Workflows, Airflow, or other orchestration tools.
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver robust data solutions.
- Tune and optimize Spark jobs for performance, scalability, and cost efficiency.
- Implement CI/CD pipelines and follow DevOps best practices for data engineering projects.
- Ensure data security, governance, and compliance using tools such as Unity Catalog.
- Troubleshoot and resolve issues related to data ingestion, transformation, and integration.
- Participate in data platform migration initiatives, including migrations to Databricks from:
- On-premises or Hadoop platforms
- Legacy ETL tools
- Other cloud or data warehouse platforms
- Mentor junior engineers and contribute to technical excellence, best practices, and innovation within the team.
Required Skills & Experience:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 6+ years of hands-on experience in Data Engineering / Big Data development.
- 3+ years of strong experience with Databricks, including Apache Spark, Delta Lake, and MLflow.
- Strong programming skills in Python and SQL.
- Solid understanding of distributed systems and Spark performance tuning.
- Hands-on experience with cloud platforms: Azure, AWS, or GCP (Azure Databricks preferred).
- Experience working with structured, semi-structured, and unstructured data.
- Familiarity with DevOps practices, including:
- Git-based version control
- CI/CD pipelines
- Infrastructure as Code (Terraform, ARM templates)
- Strong problem-solving skills and the ability to explain complex technical concepts to non-technical stakeholders.
- Proven experience in Databricks migration projects is highly desirable.
Behavioral & Soft Skills:
- Ability to thrive in a fast-paced, client-facing environment with high delivery expectations.
- Strong leadership and interpersonal skills.
- Eager to contribute to a collaborative, team-oriented culture.
- Excellent prioritization and multitasking skills with a proven track record of meeting deadlines.
- Creative and analytical approach to problem-solving.
- Strong verbal and written communication skills.
- Adaptable to new technologies, environments, and processes.
- Comfortable managing ambiguity and solving undefined problems.
Why Jade Global?
- Award-winning enterprise partner (Oracle, Salesforce, ServiceNow, Boomi, and more)
- Diverse global culture with over 2,000 employees
- High-growth environment with opportunities for innovation and leadership
- Hybrid work flexibility and inclusive leadership culture
Apply for this Position
Ready to join? Submit your application below.