Job Description
What you’ll be doing
- Build and maintain production-grade data pipelines using SQL, Python and dbt
- Design and optimise analytics-ready data models in Snowflake or BigQuery
- Own dbt projects end-to-end: models, tests, documentation and deployments
- Implement and maintain CI/CD pipelines for data workflows (Git-based version control, automated testing, promotion between environments)
- Work closely with analysts and downstream users to turn raw data into reliable, well-modelled datasets
- Monitor pipeline performance, data freshness and failures; fix issues before users feel them
- Improve data quality through testing, observability and better modelling patterns
- Contribute to platform standards and engineering best practices

Tech stack
- SQL (advanced)
- Python
- dbt (essential)
- Snowflake or BigQuery
- Git + CI/CD (e.g. GitHub Actions, GitLab CI, Azure DevOps)
- Cloud data platforms (AWS / GCP / Azure)

What we’re looking for
- Deep SQL capability and real-world Python usage (not just scripts)
- Proven experience delivering production dbt projects
- Experience working with large datasets in Snowflake or BigQuery
- Understanding of CI/CD concepts applied to data (testing, versioning, deployment)
- Comfortable working in a collaborative, engineering-led environment
- Pragmatic mindset: you care about reliability, performance and maintainability

Nice to have
- Orchestration tools (Airflow, Prefect, Dagster)
- Data quality / observability tooling
- Experience in enterprise-scale data platforms
- Exposure to security, access controls and governance in cloud data environments

Why Join?
- Work on a modern, cloud-based data platform
- Influence how data is used across a market leader
- Flexible working and a strong focus on wellbeing
- Long-term career growth in a stable, purpose-driven business