Job Description

Key Responsibilities

Data Engineering

  • Design, develop, and maintain scalable ETL pipelines and data workflows using Databricks and Apache Spark
  • Build and optimize batch and streaming data pipelines to ensure performance, reliability, and data quality
  • Work with structured and unstructured data using SQL, Python, and NoSQL databases
  • Implement data transformations using Delta Lake for efficient storage, versioning, and retrieval
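The Delta Lake transformation work described above centers on update-or-insert ("merge") semantics: update records that already exist, insert the ones that don't. A minimal sketch of that pattern, written in plain Python (no Spark dependency) with hypothetical customer records for illustration:

```python
def upsert(target: dict, updates: list, key: str = "id") -> dict:
    """Merge `updates` into `target`, keyed on `key` (update-or-insert)."""
    merged = dict(target)  # copy so the original "table" is untouched
    for row in updates:
        merged[row[key]] = row  # existing key -> update; new key -> insert
    return merged

# Hypothetical example data, not part of the job description
customers = {1: {"id": 1, "name": "Asha"}, 2: {"id": 2, "name": "Ravi"}}
changes = [{"id": 2, "name": "Ravi K."}, {"id": 3, "name": "Meera"}]
result = upsert(customers, changes)  # row 2 updated, row 3 inserted
```

In Delta Lake itself this pattern is expressed declaratively (e.g. a `MERGE INTO` statement or the DeltaTable merge API) and runs atomically against versioned storage; the sketch only shows the logical outcome.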

Databricks Platform Engineering

  • Develop and manage Databricks workspaces, clusters, notebooks, and workflows
  • Demonstrate a strong understanding of Databricks components, including:
      • Databricks SQL
      • Delta Lake
      • Databricks Runtime
      • Databricks Workflows
  • Optimize cluster performance, a...

Apply for this Position

Ready to join Persolkelly India? Submit your application to be considered for this role.
