Job Description

Job Title: PySpark, Python and AWS


Location: Kochi (direct face-to-face walk-in)

Experience: 5 to 12 Years


Experience:


  • Implementing data ingestion pipelines from different types of data sources, e.g., databases, S3, and flat files (a minimal sketch follows this list).
  • Building ETL and data warehouse transformation processes.
  • Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark, SparkSQL, and related frameworks/libraries.
  • Developing scalable, reusable, self-service frameworks for data ingestion and processing.
  • Integrating end-to-end data pipelines that move data from source systems to target data repositories while ensuring data quality and consistency.
  • Analyzing and optimizing processing performance.
  • Bringing best practices in the following areas: design and analysis, automation (pipelining, IaC), testing, monitoring, and documentation.
  • Working with structured and unstructured data.
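
For illustration only, the snippet below is a minimal sketch of the kind of PySpark/SparkSQL ingestion pipeline this role involves, assuming a Spark environment with S3 access. The bucket paths and column names (orders, order_date, amount) are hypothetical placeholders, not details from this posting.

# Hypothetical sketch: read raw CSV data from S3, apply a SparkSQL
# transformation, and write the result to a target repository as Parquet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

# Ingest: schema inference is used here for brevity; production pipelines
# would usually pin an explicit schema for consistency.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://example-raw-bucket/orders/")  # hypothetical source path
)

# Transform with SparkSQL: basic cleansing and a daily aggregation.
raw.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM orders
    WHERE amount IS NOT NULL
    GROUP BY order_date
""")

# Load: write to the target repository, partitioned for downstream reads.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-curated-bucket/orders_daily/"  # hypothetical target path
)

spark.stop()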


Good to have (knowledge):


  • Experience with cloud-based solutions.
  • Knowledge of data management principles.
