Job Description

Role : GCP Data Engineer

Required Technical Skill Set: GCP, PySpark, Dataproc, HDFS, Hadoop, SQL


Job Description:


  • Build, maintain, and troubleshoot pipelines using GCP services (e.g., Dataproc, Pub/Sub, Cloud Functions, Cloud Composer/Apache Airflow) to ingest, transform, and load data from various sources (relational databases, APIs, streaming data, flat files).
  • Implement batch data processing solutions.
  • Identify and resolve data-related issues, including data quality problems, pipeline failures, and performance bottlenecks.
  • Sound programming knowledge of PySpark and SQL for processing large volumes of semi-structured and unstructured data
  • Working knowledge of Avro and Parquet file formats
  • Knowledge of the Hadoop big data platform and ecosystem

Good-to-Have

  • Knowledge of Jira, Agile, Sonar, TeamCity, and CI/CD
  • Any exposure / experience for an ...

Apply for this Position

Ready to join Tata Consultancy Services? Submit your application below.