Job Description
Key Responsibilities:
- Design, implement, and optimize data pipelines using Apache Spark and Databricks to ingest, process, and transform large-scale structured and unstructured datasets.
- Develop, schedule, and monitor ETL workflows and DAGs using orchestration ...