Job Description

Title: Data Engineer
Location: Bangalore, Hyderabad, Noida
Experience: 4-8 years

Key Responsibilities

Design, develop, and maintain scalable ETL/ELT pipelines using Databricks, PySpark, and Airflow.

Build and optimize data models and data lakes/warehouses on AWS.

Implement best practices for data quality, data governance, and performance optimization.

Collaborate with cross-functional teams (data scientists, analysts, product, and business teams) to deliver data-driven solutions.

Ensure reliability, scalability, and efficiency of data workflows through automation and monitoring.

Troubleshoot complex data engineering issues and optimize processing performance.


Required Skills & Qualifications

Bachelor's/Master's degree in Computer Science, Engineering, or a related field.

4+ years of hands-on experience in Data Engineering.

Strong expertise in PySpark, Databricks, and Airflow for large-scale data processing and orchestration.

Solid experience with AWS services such as S3, Glue, Redshift, EMR, Lambda, and IAM.

Strong knowledge of SQL and performance tuning.

Experience with CI/CD pipelines, Git, and containerization (Docker/Kubernetes) is a plus.

Strong problem-solving and communication skills, and the ability to work in a fast-paced environment.
