Job Description
Role 2: Data Engineer (Junior)
Experience: 2–3 Years
Location: Bangalore
Role Summary
We are looking for a Junior Data Engineer to support the development and maintenance of modern data pipelines on Azure. The role is ideal for engineers with strong Spark and SQL fundamentals who want to grow into enterprise-scale data engineering and governance.
Key Responsibilities
- Develop and maintain data pipelines using Azure Databricks
- Build and support ADF pipelines for data ingestion and transformation
- Implement data transformations using Spark and SQL
- Work with metadata-driven frameworks under guidance from senior engineers
- Support Unity Catalog implementation for basic data governance and access controls
- Assist in data validation, quality checks, and pipeline monitoring
- Follow data engineering best practices for performance, reliability, and maintainability
- Collaborate with senior engineers and pod leads to deliver project milestones
Required Skills & Qualifications
- 2–3 years of hands-on experience in Data Engineering
- Working experience with Azure Databricks
- Basic understanding of Unity Catalog concepts
- Hands-on experience with Azure Data Factory (ADF) pipelines
- Strong fundamentals in Spark and SQL
- Understanding of data engineering best practices (partitioning, joins, optimization basics)
- Ability to troubleshoot data pipeline issues
Good to Have
- Exposure to metadata-driven or configuration-based data frameworks
- Basic knowledge of data modeling concepts
- Familiarity with version control and CI/CD pipelines