Job Description – Azure Databricks using Python/PySpark/SparkSQL.
Experience Range – 6+ years.
Job Description -
- Bachelor's or Master's degree with a strong academic record from a reputed college
- 3+ years of ETL/Data Analysis experience with a reputed firm
- Expertise in a managed big data platform environment such as Databricks, using Python/PySpark/SparkSQL
- Experience in handling large data volumes and orchestrating automated ETL/data pipelines using CI/CD and cloud technologies.
- Experience deploying ETL/data pipelines and workflows on cloud platforms and architectures such as Azure and Amazon Web Services (AWS) will be valued
- Experience in data modelling (e.g., database structures, entity relationships, UIDs), data profiling, and data quality validation.
- Experience adopting software development best practices (e.g., modularization, testing, refactoring)
- Ability to conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools
- Excellent written and verbal communication skills in English
- Self-motivated, with a strong sense of ownership, a problem-solving attitude, and an action-oriented mindset
- Able to cope with pressure and demonstrate a reasonable level of flexibility/adaptability
- Track record of strong problem-solving, requirement gathering, and leading by example
- Able to work well within teams across continents/time zones with a collaborative mindset