Senior AWS Data Engineer

Hiring a Senior AWS Data Engineer for a global consulting firm.


Experience: 4+ years

Location: Kochi, Noida, Bangalore, Pune

Skills:

System design and architecture, PySpark, Python, SQL, Git, GitHub, and AWS services (Glue, Lambda, Step Functions, S3, Athena).


Job Description:


We are seeking a skilled Senior AWS Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS services to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies, and will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.


Responsibilities:

1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently (a brief illustrative sketch follows this list).

2. Collaborate with analysts to understand data requirements and ensure data availability and quality; maintain a solid understanding of the project architecture in order to make changes as required.

3. Write highly optimized SQL queries for data extraction, transformation, and loading.

4. Utilize Git for version control, ensuring proper documentation and tracking of code changes.

5. Design, implement, and manage scalable data lakes on AWS, using S3 and other relevant services for efficient data storage and retrieval.

6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB.

7. Create interactive dashboards and data visualizations in Amazon QuickSight.

8. Automate workflows using AWS services such as EventBridge and Step Functions.

9. Monitor and optimize data processing workflows for performance and scalability.

10. Troubleshoot data-related issues and provide timely resolution.

11. Stay up to date with industry best practices and emerging technologies in data engineering.
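
For illustration only, below is a minimal sketch of the kind of Glue PySpark ETL job described in responsibility 1. All bucket names, paths, and column names are hypothetical placeholders, not details from this posting.

# Illustrative sketch of a minimal AWS Glue PySpark ETL job.
# Buckets, paths, and columns are hypothetical placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw CSV data from S3 (hypothetical bucket and path).
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Transform: deduplicate, cast types, and drop unusable rows.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_total", F.col("order_total").cast("double"))
       .filter(F.col("order_total").isNotNull())
)

# Load: write partitioned Parquet back to S3 for querying with Athena.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()

This extract-clean-write pattern (raw data in S3, PySpark transformations, partitioned Parquet output for Athena) is a common baseline for the Glue-based pipelines this role covers.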


Qualifications:

1. Bachelor's degree in Computer Science, Data Science, or a related field; a master's degree is a plus.

2. Strong proficiency in PySpark and Python for data processing and analysis.

3. Proficiency in SQL for data manipulation and querying.

4. Experience with version control systems, preferably Git.

5. Expertise with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, and CodeCommit.

6. Familiarity with Databricks and its concepts.

7. Excellent problem-solving skills and attention to detail.

8. Strong communication and collaboration skills to work effectively within a team.

9. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.


Preferred Skills:

1. Knowledge of data warehousing concepts and data modeling.

2. Familiarity with big data technologies like Hadoop and Spark.

3. AWS certifications related to data engineering.
