Job Description

Key Responsibilities:

  • Design, build, and optimize ETL and ELT pipelines using Databricks and Apache Spark
  • Work with big data processing frameworks (PySpark, Scala, SQL) for data transformation and analytics
  • Implement Delta Lake architecture for data rel...
