Job Description – Big Data Engineer
Experience: 8+ Years
Location: Chennai & Gurgaon
Mode: Hybrid
We are looking for a Big Data Engineer with strong experience in SQL, Hive, ETL pipelines, PySpark, and GCP to design and build scalable data solutions for large, complex datasets.
Key Responsibilities
- Develop and optimize Big Data pipelines using SQL, Hive, PySpark, and ETL frameworks (see the sketch after this list).
- Build and maintain scalable data solutions on GCP (BigQuery, Bigtable, Dataflow, Dataproc, etc.).
- Design and implement data models for analytical and operational systems.
- Work with diverse storage systems — relational, NoSQL, document, column-family, and graph databases.
- Ensure high performance, reliability, data quality, and secure data management.
- Optimize SQL queries and improve performance across distributed systems.
- Collaborate with cross-functional teams following Agile/Scrum methodologies.
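To give candidates a concrete flavor of this work, here is a minimal PySpark sketch of such a pipeline: extract a Hive table, transform it with a daily aggregation, and load the result into BigQuery. All table, project, and bucket names are hypothetical, and it assumes Hive support and the spark-bigquery connector are available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical names throughout; assumes Hive support and the
# spark-bigquery connector are configured on the cluster.
spark = (
    SparkSession.builder
    .appName("orders-daily-etl")
    .enableHiveSupport()
    .getOrCreate()
)

# Extract: read a raw Hive table.
orders = spark.table("raw.orders")

# Transform: drop bad rows, derive a date column, aggregate per day/region.
daily = (
    orders
    .filter(F.col("order_ts").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count("*").alias("order_count"),
    )
)

# Load: write the aggregate to BigQuery, staging through a GCS bucket.
(
    daily.write
    .format("bigquery")
    .option("table", "my-project.analytics.daily_orders")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("overwrite")
    .save()
)
```

Aggregating before the BigQuery load keeps the written output small, which is typical for analytical pipelines of this kind.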
Required Skills
- Strong hands-on experience with SQL, Hive, PySpark/Python, and Big Data ETL.
- Experience with GCP Big Data services (BigQuery, Dataflow, Bigtable, etc.).
- Knowledge of NoSQL systems (HBase, Cassandra, MongoDB, etc.).
- Understanding of distributed systems, data storage formats, and performance tuning.
- Experience with data modeling tools (ERwin, ER/Studio).
- Familiarity with data management concepts: replication, partitioning, encryption, high availability (see the sketch after this list).
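As a small illustration of the partitioning concept above, the sketch below creates a time-partitioned BigQuery table with the google-cloud-bigquery Python client. Project, dataset, and table names are hypothetical; it covers partitioning only, since replication and encryption at rest are handled by BigQuery itself.

```python
from google.cloud import bigquery

# Hypothetical project, dataset, and table names.
client = bigquery.Client(project="my-project")

schema = [
    bigquery.SchemaField("event_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("payload", "STRING"),
]

table = bigquery.Table("my-project.analytics.events", schema=schema)

# Partition by day on the event timestamp so queries that filter on
# event_ts scan only the relevant partitions.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
)

client.create_table(table)
```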
Nice to Have
- Experience with OLTP/OLAP systems.
- Understanding of infrastructure for performance tuning (e.g., Nutanix, network-attached storage).
- Exposure to Java or other programming languages.
Apply for this Position
Ready to join? Submit your application.