Job Description
Job Title - Staff / Senior Data Engineer
Job Location - Pune, Maharashtra
Experience - 4 to 10 years
What You’ll Do:
In this role, you will take the lead in designing, building, and optimizing data pipelines, ensuring high-quality data integration and real-time analytics. As a Staff/Senior Data Engineer, you will mentor and guide a team of data engineers, collaborate with cross-functional teams, and drive best practices for data architecture and engineering.
- Leadership & Mentorship:
o Lead and mentor a team of engineers, ensuring technical excellence and continuous improvement.
o Provide guidance on complex technical issues and foster a culture of collaboration and innovation.
o Develop and maintain data engineering processes, practices, and standards across the team.
- Data Architecture & Engineering:
o Design and develop robust, scalable, and efficient data pipelines for ETL (Extract, Transform, Load) processes.
o Architect data solutions and ensure seamless integration of data from diverse sources, both on-premises and in the cloud.
o Optimize data storage, retrieval, and processing for performance and scalability.
- Collaboration & Stakeholder Management:
o Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions that meet business needs.
o Collaborate with the software engineering team to ensure smooth integration of data pipelines into applications.
- Data Quality & Governance:
o Ensure data integrity, quality, and consistency across systems.
o Implement and enforce data governance policies and best practices, ensuring compliance with data privacy regulations.
- Continuous Improvement:
o Stay updated with the latest trends in data engineering, big data technologies, and cloud platforms.
o Drive automation, testing, and optimization within data pipelines to improve overall efficiency.
- Technology Stack:
o Lead the selection and implementation of appropriate technologies for data storage, processing, and retrieval (e.g., Hadoop, Spark, Kafka, Airflow).
o Experience working with public clouds like GCP/AWS.
o Proficient with SQL, Bash, and at least one object-oriented language such as Java (with Spring Boot) or Python.
o Experience with Apache open-source projects such as Spark, Druid, Beam, or Airflow, and with big data databases like BigQuery or ClickHouse.
Who You Are:
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field (or equivalent work experience).
- 8+ years of experience in data engineering or related roles, with at least 2 years in a leadership or senior technical position.
- Extensive experience with data warehousing, ETL pipelines, data modeling, and data architecture.
- Strong experience working with large-scale data processing frameworks (e.g., Hadoop, Apache Spark, Kafka).
- Proficient in SQL, Python, Java, or other relevant programming languages.
- Hands-on experience with cloud-based platforms (AWS, GCP, Azure).
- Familiarity with data orchestration tools like Apache Airflow or similar.
- Strong knowledge of databases (SQL and NoSQL) and data storage technologies.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to work effectively with both technical and non-technical teams.
Apply for this Position
Ready to join? Click the button below to submit your application.