Job Description – Databricks Engineer / Associate Manager / Manager

Experience Range: 6–12 Years

Locations: Pune, Bangalore, Chennai, Gurgaon, Hyderabad, Kolkata

Role Levels: Databricks Engineer / Associate Manager – Data Engineering / Manager – Data Engineering

Notice Period: Immediate to 20 days only


Role Overview

We are looking for experienced Data Engineering professionals with strong expertise in Databricks, Apache Spark, and cloud-based data platforms. The role involves building scalable data pipelines, optimizing big-data workloads, implementing data governance, and driving best practices across Data Engineering teams.


Key Responsibilities

For Databricks Engineer (6–9 Years)

  • Design, develop, and maintain ETL/ELT pipelines using Databricks (Python/Scala/Spark SQL).
  • Work extensively with Delta Lake, including ACID transactions, schema evolution, time travel, and performance optimization.
  • Build and optimize workflows using Databricks Workflows, Delta Live Tables (DLT), and Unity Catalog.
  • Configure and optimize Databricks clusters and Spark jobs for performance and cost efficiency.
  • Work with cloud storage systems (S3/ADLS/GCS), IAM/RBAC, and networking/security components.
  • Implement Medallion Architecture (Bronze → Silver → Gold).

For Associate Manager – Data Engineering (8–10 Years)

All responsibilities of the Engineer role, plus:

  • Lead small teams in delivering Databricks-based data solutions.
  • Own end-to-end design of data pipelines and distributed systems.
  • Implement DevOps and CI/CD practices using Git, Azure DevOps, Terraform, and the Databricks CLI.
  • Ensure data governance compliance, security controls, and best engineering practices.

For Manager – Data Engineering (10–12 Years)

All responsibilities of the Associate Manager role, plus:

  • Drive architectural decision-making for large-scale data platforms on Databricks.
  • Partner with business, product, analytics, and data science teams to align solutions with business goals.
  • Lead multiple project teams, mentor engineers, and ensure delivery excellence.
  • Define the long-term data engineering roadmap, standards, and best practices.


Core Technical Skills (All Roles)

  • Databricks Platform Expertise: Workspace, notebooks, DLT, Workflows, Unity Catalog.
  • Apache Spark: PySpark, Spark SQL, Scala/Java for performance tuning.
  • Delta Lake: ACID transactions, schema enforcement, Z-ordering, and table optimization.
  • Programming: Python and/or Scala; SQL for analytics and data validation.
  • Cloud Platforms: AWS / Azure / GCP — storage, IAM, networking, security.
  • ETL/Data Architecture: Batch & streaming pipelines, Medallion Architecture.
  • Performance Optimization: Diagnosing data skew and memory issues; tuning Spark jobs and clusters.
  • DevOps & CI/CD: Azure DevOps, Git, Terraform, Databricks CLI.
  • Security & Governance: Row-level/column-level security, encryption, access controls.


Soft Skills & Leadership Competencies

  • Strong analytical and problem-solving ability.
  • Effective communication with stakeholders, analysts, and cross-functional teams.
  • Ability to mentor junior engineers and enable team growth (Associate Manager/Manager levels).
  • Strategic thinking with the ability to influence design and technology decisions.
  • Ownership mindset with strong execution focus.


Preferred Certifications

  • Databricks Certified Data Engineer Associate / Professional
  • Databricks Lakehouse Fundamentals
  • Cloud certifications: AWS/Azure/GCP (Associate or above)

Apply for this Position

Ready to join? Click the button below to submit your application.

Submit Application