Job Description

Senior Data Engineer with AWS


Key Responsibilities:

·      Design and implement robust ETL pipelines using AWS Glue, Lambda, and S3.

·      Monitor and optimize the performance of data workflows and batch processing jobs.

·      Troubleshoot and resolve issues related to data pipeline failures, inconsistencies, and performance bottlenecks.

·      Collaborate with cross-functional teams to define data requirements and ensure data quality and accuracy.

·      Develop and maintain automated solutions for data transformation, migration, and integration tasks.

·      Implement best practices for data security, data governance, and compliance within AWS environments.

·      Continuously improve and optimize AWS Glue jobs, Lambda functions, and S3 storage management.

·      Maintain comprehensive documentation for data pipeline architecture, job schedules, and issue resolution processes.
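To give candidates a concrete sense of the transformation work described above, here is a minimal, stdlib-only Python sketch of typical ETL cleanup logic (the column names "region" and "amount" and the sample data are purely illustrative; a real Glue job would run similar logic with PySpark against data in S3):

```python
import csv
import io

def clean_and_aggregate(raw_csv: str) -> dict:
    """Illustrative ETL transform: parse raw CSV, skip malformed rows,
    and aggregate amounts per region. Column names are hypothetical."""
    totals = {}
    reader = csv.DictReader(io.StringIO(raw_csv))
    for row in reader:
        region = (row.get("region") or "").strip()
        if not region:
            continue  # drop rows with a missing region key
        try:
            amount = float(row.get("amount", ""))
        except ValueError:
            continue  # skip malformed numeric fields rather than failing the job
        totals[region] = totals.get(region, 0.0) + amount
    return totals

sample = """region,amount
us-east,100.0
us-east,50.5
eu-west,75.0
,10.0
us-east,not-a-number
"""
print(clean_and_aggregate(sample))  # {'us-east': 150.5, 'eu-west': 75.0}
```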

Required Skills and Experience:

·      Strong experience with Data Engineering practices.

·      Experience in AWS services, particularly AWS Glue, Lambda, S3, and other AWS data tools.

·      Proficiency in SQL, Python, PySpark, and NumPy, and experience working with large-scale data sets.

·      Experience in designing and implementing ETL pipelines in cloud environments.

·      Expertise in troubleshooting and optimizing data processing workflows.

·      Familiarity with data warehousing concepts and cloud-native data architecture.

·      Knowledge of automation and orchestration tools in a cloud-based environment.

·      Strong problem-solving skills and the ability to debug and improve the performance of data jobs.

·      Excellent communication skills and the ability to work collaboratively with cross-functional teams.

·      Knowledge of dbt and Snowflake is a plus.

Preferred Qualifications:


·      Bachelor’s degree in Computer Science, Information Technology, Data Engineering, or a related field.

·      Experience with other AWS data services like Redshift, Athena, or Kinesis.

·      Familiarity with scripting languages beyond Python for data engineering tasks.

·      Experience with containerization and orchestration tools like Docker or Kubernetes.


Location: Candidates should be based in Gurgaon or Hyderabad.


Apply for this Position

Ready to join? Click the button below to submit your application.
