Job Description
Roles & Responsibilities:
- Develop distributed data pipelines using PySpark on Databricks for ingesting, transforming, and publishing master data
- Write optimized SQL for large-scale data processing, including complex joins, window functions, and CTEs for MDM log...
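The second bullet names CTEs and window functions in an MDM context. As an illustrative sketch only (not the employer's actual pipeline or schema; the table and column names are hypothetical), the snippet below shows the common "latest record wins" survivorship pattern: a CTE ranks raw records per entity with `ROW_NUMBER()` and the query keeps rank 1. Python's built-in sqlite3 stands in for the production warehouse so the example is self-contained; the same SQL shape runs on Spark SQL.

```python
import sqlite3

# Hypothetical master-data table: several source-system records per customer.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer_raw (
    customer_id TEXT,
    source      TEXT,
    name        TEXT,
    updated_at  TEXT
);
INSERT INTO customer_raw VALUES
    ('c1', 'crm', 'Ann Smith',    '2024-01-05'),
    ('c1', 'erp', 'Ann B. Smith', '2024-03-10'),
    ('c2', 'crm', 'Bob Jones',    '2024-02-01');
""")

# CTE + window function: rank each customer's records by recency,
# then keep only the most recent ("golden") record per customer.
golden = con.execute("""
WITH ranked AS (
    SELECT customer_id, name, updated_at,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id
               ORDER BY updated_at DESC
           ) AS rn
    FROM customer_raw
)
SELECT customer_id, name, updated_at
FROM ranked
WHERE rn = 1
ORDER BY customer_id
""").fetchall()

print(golden)
# [('c1', 'Ann B. Smith', '2024-03-10'), ('c2', 'Bob Jones', '2024-02-01')]
```

Candidates are often asked to explain exactly this pattern in interviews for roles like this one: the CTE keeps the ranking logic readable, and `PARTITION BY` scopes the ranking to each master entity.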