Job Description

Key Responsibilities:

  • Build scalable ETL pipelines and implement robust data solutions using Azure technologies.
  • Orchestrate workflows with Azure Data Factory (ADF) and Databricks, using ADLS Gen2 for storage and Azure Key Vault for secrets management.
  • Design, maintain, and optimize secure and efficient data lake architectures.
  • Collaborate with stakeholders to gather requirements and translate them into detailed technical specifications.
  • Implement CI/CD pipelines in Azure DevOps to automate deployment of data solutions.
  • Monitor and troubleshoot data quality, performance bottlenecks, and scalability issues in production pipelines.
  • Write clean, modular, and reusable PySpark code within an Agile development process.
  • Maintain thorough documentation of data pipelines, architecture designs, and best practices for team reuse.

Must-Have Skills:

  • 6+ years of experience in Data ...

Apply for this Position

Ready to join Suzva Software Technologies? Submit your application.