Job Description

The opportunity

We’re looking for candidates with strong technology and data expertise in the big data engineering space and a proven record of delivery. This is a fantastic opportunity to be part of a leading firm and of a growing Data and Analytics team.

Your key responsibilities

  • Develop and deploy data lakehouse pipelines in a cloud environment using AWS and Databricks services.

  • Design and implement end-to-end ETL/ELT workflows using Databricks (PySpark, Delta Lake, DLT) and AWS native services such as S3, Glue, Lambda, and Step Functions.

  • Migrate existing on-premises ETL workloads to the AWS + Databricks platform with optimized performance and cost efficiency.

  • Interact with business and technical stakeholders, understand their data goals, and translate them into scalable, governed solutions.

  • Design, optimize, and monitor Spark jobs and SQL models for faster execution and h...
Apply for this Position

Ready to join EY? Submit your application.