Job Description

Job Title: Data Engineer / Sr. Data Engineer
Experience Level: 3-6 Years
Work Mode: Hybrid (2 PM to 11 PM)
Location: Bangalore/Gurgaon
Job Type: Full-Time
Company: StatusNeo

About the Role

We're looking for a Data Engineer who thrives on solving complex data challenges, building scalable pipelines, and delivering reliable insights that drive business decisions. The ideal candidate will bring hands-on expertise in cloud data platforms (AWS, GCP), Snowflake, Big Data ecosystems, and distributed computing using Spark and Scala.

Key Responsibilities

  • Design, build, and maintain data pipelines for large-scale ingestion, transformation, and integration from multiple data sources.
  • Develop and optimize ETL/ELT workflows leveraging tools such as Spark, Scala, and SQL.
  • Implement data models and data warehousing solutions on Snowflake and cloud environments (AWS/GCP).
  • Work closely with data scientists, analysts, and business stakeholders to deliver high-quality, production-ready data solutions.
  • Manage data quality, governance, and security across environments.
  • Optimize data pipeline performance, focusing on scalability, efficiency, and cost in cloud environments.
  • Collaborate with DevOps teams on CI/CD deployment and monitoring of data solutions.

Technical Skill Set (Must Have)

  • Programming Languages: Scala, SQL
  • Big Data Frameworks: Apache Spark, Hadoop ecosystem
  • Cloud Platforms: AWS (S3, EMR, Glue, Redshift, Lambda) and/or GCP (BigQuery, Dataflow, Dataproc, Composer)
  • Data Warehouse: Snowflake (data modeling, performance tuning, query optimization)
  • ETL Tools: Spark-based ETL or Airflow/Glue/Composer
  • Version Control & CI/CD: Git, Jenkins, or equivalent
  • Data Formats: Parquet, Avro, ORC, JSON, CSV

Good to Have

  • Knowledge of Python for data scripting and automation.
  • Familiarity with containerization (Docker, Kubernetes).
  • Experience with data cataloging and metadata management tools.
  • Exposure to streaming frameworks (Kafka, Pub/Sub).
