Job Description

Job Title: Data Engineer

Experience: 5+ Years

Location: Chennai/Hyderabad/Mumbai/Pune/Bangalore (Hybrid)

Key Responsibilities:

Python Proficiency:

Demonstrate a strong command of the Python programming language, actively contributing to the development and maintenance of data engineering solutions.

Data Engineering Expertise:

Set up and maintain efficient data pipelines, ensuring smooth data flow and integration across systems. Experience with SQL and data warehouses/data lakes is required. Contribute to the establishment and maintenance of data lakes, implementing industry best practices. Execute data scrubbing techniques and implement data validation processes to ensure data integrity and quality.

Tool and Platform Proficiency:

Experience with the Databricks platform is required; beyond this, expertise in at least one other popular tool or platform within the data engineering domain is nice to have. Stay informed about industry trends, exploring and adopting tools to optimize data engineering processes.

Collaboration and Communication:

Collaborate effectively with cross-functional teams, including data scientists, software engineers, and business analysts. Communicate technical concepts to non-technical stakeholders, fostering collaboration and innovation within the team.

Documentation and Best Practices:

Contribute to maintaining comprehensive documentation for data engineering processes, ensuring knowledge transfer within the team. Adhere to best practices in data engineering, promoting a culture of quality and efficiency.

Must-Have Skills:

Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.

Minimum of 4 years of proven experience as a Data Engineer.

Strong proficiency in Python programming language and SQL.

Experience with Databricks and with setting up and managing data pipelines and data warehouses/lakes.

Experience with big data technologies such as Apache Spark and Hadoop.

Good comprehension and critical thinking skills.

Nice-to-Have Skills (Optional):

Exposure to cloud-based data platforms (AWS/Azure/GCP) and PySpark.

Familiarity with containerization tools like Docker or Kubernetes.

Interest in data visualization tools (Tableau, Power BI, etc.).

Certifications in relevant data engineering or machine learning technologies.

Apply for this Position

Ready to join? Click the button below to submit your application.

Submit Application