Job Description

Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to move and deploy data across systems.

Responsibilities and Qualifications

Responsibilities:

  • Develop high-quality, scalable ETL/ELT pipelines using Databricks technologies, including Delta Lake, Auto Loader, and Delta Live Tables (DLT).
  • Create modular dbx functions for transformation, PII masking, and validation logic, reusable across DLT and notebook pipelines (see the masking sketch after this list).
  • Implement ingestion patterns using Auto Loader with checkpointing and schema evolution for structured and semi-structured data (see the ingestion sketch after this list).
  • Build ...

Qualifications:

  • Excellent programming and debugging skills in Python.
  • Strong hands-on experience with PySpark, building efficient data transformation and validation logic.
  • Proficiency in at least one cloud platform: AWS, GCP, or Azure.
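
For illustration, a minimal sketch of the kind of reusable masking helper the role calls for, assuming hypothetical column names ("email", "ssn") and a one-way SHA-256 hashing approach; an actual implementation would follow the project's masking policy:

```python
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def mask_pii(df: DataFrame, pii_columns: list[str]) -> DataFrame:
    """Replace each listed PII column with a one-way SHA-256 hash."""
    for col in pii_columns:
        # sha2 returns NULL for NULL input, so missing values stay missing
        df = df.withColumn(col, F.sha2(F.col(col).cast("string"), 256))
    return df
```

Because it takes and returns a plain DataFrame, the same function can be imported into a DLT pipeline, e.g. mask_pii(dlt.read("customers_raw"), ["email", "ssn"]) inside an @dlt.table-decorated function, or called directly in a notebook.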
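
Likewise, a minimal sketch of an Auto Loader stream with checkpointing and schema evolution; the paths, input format, and table name are placeholders, and the cloudFiles source is available only on the Databricks runtime:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

(spark.readStream
    .format("cloudFiles")                                           # Auto Loader source
    .option("cloudFiles.format", "json")                            # semi-structured input
    .option("cloudFiles.schemaLocation", "/mnt/chk/events_schema")  # where the inferred schema is tracked
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")      # evolve when new fields arrive
    .load("/mnt/landing/events")
    .writeStream
    .option("checkpointLocation", "/mnt/chk/events")                # resumable, exactly-once progress
    .trigger(availableNow=True)                                     # drain the backlog, then stop
    .toTable("bronze.events"))
```

With addNewColumns, the stream stops when an unexpected column appears, records it at the schema location, and picks it up on restart; the checkpoint is what makes that restart safe.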
