Job Description
DevOps Engineer - Database, PostgreSQL
Department: Data – DevOps
Reporting to: DevOps Manager
At the heart of our platform sits one of the most critical capabilities in our technology estate: reliable, scalable, self-service databases.
We’re looking for a Database Reliability Engineer who thinks and operates like a DevOps engineer first, with deep expertise in databases. You’ll be hands-on, designing and running production-grade PostgreSQL platforms, but you’ll also help define the way databases are delivered across the organisation. Your work will directly shape how teams provision, operate, and scale data services through our Internal Developer Platform (IDP).
If you’re passionate about operating databases the DevOps way, and thrive at the intersection of platform engineering, automation, and data, this role is made for you.
Owning the Database Platform
You’ll take ownership of our database services from the ground up — designing, building, and operating production-ready PostgreSQL environments (with room to grow into other database technologies).
Design, deploy, and operate PostgreSQL clusters with a strong focus on availability, resilience, and performance
Build and maintain automation for database provisioning, upgrades, and lifecycle management using Terraform, Ansible, Helm, and Kubernetes Operators
Our goal is simple: developers should be able to consume database services quickly, safely, and with minimal friction (a short illustrative sketch follows the list below).
Embed database capabilities directly into the IDP via self-service portals, APIs, and GitOps workflows
Define and maintain “golden paths” for database provisioning and day-2 operations
Work closely with application teams, SREs, and platform engineers to integrate databases seamlessly into application lifecycles
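To give a flavour of what “self-service” can mean in practice, here is a minimal sketch of one possible golden path: a small helper that provisions a PostgreSQL cluster declaratively through a Kubernetes Operator. CloudNativePG is used purely as an illustration (one of several PostgreSQL operators), and the team name, sizing, and defaults are hypothetical rather than a description of our actual stack.

```python
# Illustrative sketch only: provision a PostgreSQL cluster via a Kubernetes
# Operator custom resource. CloudNativePG is used as an example operator;
# team names, sizes, and defaults are hypothetical.
from kubernetes import client, config


def provision_postgres(team: str, instances: int = 3, storage: str = "20Gi") -> dict:
    """Create a CloudNativePG Cluster custom resource in the team's namespace."""
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    manifest = {
        "apiVersion": "postgresql.cnpg.io/v1",
        "kind": "Cluster",
        "metadata": {"name": f"{team}-postgres", "namespace": team},
        "spec": {
            "instances": instances,                    # one primary plus replicas for HA
            "storage": {"size": storage},
            "monitoring": {"enablePodMonitor": True},  # expose metrics for Prometheus
        },
    }
    api = client.CustomObjectsApi()
    return api.create_namespaced_custom_object(
        group="postgresql.cnpg.io",
        version="v1",
        namespace=team,
        plural="clusters",
        body=manifest,
    )


if __name__ == "__main__":
    provision_postgres(team="payments")
```

In a GitOps workflow the generated manifest would typically be committed to a repository and reconciled by the platform rather than applied directly; the point of the sketch is simply that a database becomes a declarative, versioned resource a team can request on its own.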
You’ll ensure our data platforms are observable, measurable, and trustworthy.
Implement comprehensive monitoring, logging, and alerting using tools such as Prometheus, Grafana, ELK, or similar
Define SLOs, performance benchmarks, and reliability standards for database workloads
Embed security controls, governance, and auditability directly into automation pipelines
Partner with engineering teams to understand their needs and evolve our database offerings
Create clear documentation, onboarding guides, and training materials so teams can self-serve with confidence
Act as a database and platform SME within the DevOps and Platform Engineering organisation
PostgreSQL Expertise: 3+ years running PostgreSQL in production, including administration, tuning, HA/DR, and migrations
Kubernetes & DevOps Mindset: Strong Kubernetes fundamentals and hands-on experience running stateful workloads, ideally with Operators
Automation First: Proven experience with Infrastructure-as-Code and automation (Terraform, Ansible, Helm, or similar)
Software Engineering Skills: Ability to code in Python, Go, or similar to build automation, integrations, and APIs
Observability: Experience with monitoring and alerting platforms such as Prometheus, Grafana, ELK, or Datadog
Collaboration & Communication: Comfortable working across teams and communicating clearly in English
Experience with other data technologies such as MySQL, Elasticsearch, Cassandra, or similar
Exposure to public cloud platforms (AWS, GCP, Azure)
Please apply directly with your most up-to-date CV for immediate consideration.