Job Description

  • Develop and implement efficient data pipelines using Apache Spark (PySpark preferred) to process and analyze large-scale data.
  • Design, build, and optimize complex SQL queries to extract, transform, and load (ETL) data from multiple sources.
  • Orchestrate data w...