Job Description

Key Responsibilities:

  • Design, build, and optimize ETL and ELT pipelines using Databricks and Apache Spark
  • Work with big data processing technologies (PySpark, Scala, SQL) for data transformation and analytics
  • Implement Delta Lake architecture for data reliability, ACID transactions, and schema evolution
  • Integrate Databricks with cloud data platforms such as Azure Data Lake Storage, Amazon S3, Google BigQuery, and Snowflake
  • Develop and maintain data models, data lakes, and data warehouse solutions
  • Tune Spark performance and optimize job scheduling and cluster configurations
  • Work with Azure Synapse, AWS Glue, or GCP Dataflow to enable seamless data integration
  • Implement CI/CD automation for data pipelines using Azure DevOps, GitHub Actions, or Jenkins
  • Perform data quality checks, validation, and governance using Databricks Unity Catalog
  • Collaborate with data scientists, analysts, and business teams to support a...

Apply for this Position

Ready to join Hanker Systems India? Submit your application for this role.