Job Description
We are looking for energetic, high-performing, and highly skilled GCP Data Engineers to help shape our technology and product roadmap.
Responsibilities:
- Develop and maintain large-scale data processing pipelines using PySpark, Dataproc, BigQuery, and SQL.
- Use BigQuery and Dataproc to migrate existing Hadoop/Spark/Hive workloads to Google Cloud.
- Use BigQuery to carry out batch and interactive data analysis.
- Function as a member of an agile team by contributing to software builds through consistent development practices (tools, common components, and documentation).
- Develop and test software, including ongoing refactoring of code, and drive continuous improvement in code structure and quality.
- Enable the deployment, support, and monitoring of software across test, integration, and production environments.
Minimum Qualifications:
...