Job Description

  • Design and architect scalable data platforms using Databricks and Apache Spark.
  • Lead end-to-end implementation of data lakes, lakehouses, and analytics solutions.
  • Define data engineering, security, and governance best practices.
  • Collaborate with data engineers, data scientists, and business stakeholders.
  • Optimize performance, cost, and reliability of Databricks workloads.

Requirements

  • Strong hands-on experience with Databricks, Spark, and Delta Lake.
  • Proficiency in Python/Scala and SQL for big data processing.
  • Experience with cloud platforms (AWS, Azure, or GCP).
  • Solid understanding of data architecture, ETL/ELT, and data modeling.
  • Excellent communication and solution-design skills.

Apply for this Position

Ready to join Unison Group? Click the button below to submit your application.
