Job Description: Big Data Engineer

Experience: 8+ Years

Location: Chennai & Gurgaon

Mode: Hybrid


We are looking for a Big Data Engineer with strong experience in SQL, Hive, ETL pipelines, PySpark, and Google Cloud Platform (GCP) to design and build scalable data solutions for large, complex datasets.

Key Responsibilities

  • Develop and optimize Big Data pipelines using SQL, Hive, PySpark, and ETL frameworks.
  • Build and maintain scalable data solutions on GCP (BigQuery, Bigtable, Dataflow, Dataproc, etc.).
  • Design and implement data models for analytical and operational systems.
  • Work with diverse storage systems, including relational, NoSQL, document, column-family, and graph databases.
  • Ensure high performance, reliability, data quality, and secure data management.
  • Optimize SQL queries and improve performance across distributed systems.

Apply for this Position

Ready to join BCforward? Submit your application for this position.
