Job Description

Job Title: Senior / Staff Full-Stack Data Engineer – Databricks 

About Us

At Codvo, we are committed to building scalable, future-ready data platforms that power business impact. We believe in a culture of innovation, collaboration, and growth, where engineers can experiment, learn, and thrive. Join us to be part of a team that solves complex data challenges with creativity and cutting-edge technology.



Role Overview

We are seeking a Senior / Staff Full-Stack Data Engineer with deep Databricks expertise to design, build, and operate scalable data and machine learning pipelines. This role works closely with data scientists, platform teams, and application engineers to productionize analytics and ML workloads with high reliability, performance, and cost efficiency.

Key Responsibilities

Design, build, and maintain ETL/ELT pipelines on Databricks using Spark, Delta Lake, and Databricks Workflows

Build and operate batch and real-time data pipelines for ingestion, transformation, and orchestration

Operationalize machine learning inference pipelines authored by data scientists (batch and real-time)

Ensure consistency between model training and inference environments

Implement data quality checks, validation rules, monitoring, alerting, and automated recovery

Collaborate with data scientists to productionize models and optimize inference performance and cost

Implement CI/CD, DevOps, and MLOps best practices for data pipelines and ML workflows

Optimize compute, storage, and job configurations for performance and cost efficiency

Implement and manage enterprise data governance using Unity Catalog (schemas, lineage, ownership, documentation)

Configure and manage Databricks infrastructure and platform-level settings

Required Skills & Experience

Strong hands-on experience with Databricks, Apache Spark, and Delta Lake

Proven experience building and operating production-grade data pipelines

Experience operationalizing machine learning models and inference pipelines

Strong understanding of data reliability, observability, and monitoring practices

Experience with CI/CD, DevOps, and MLOps workflows

Experience working with cloud platforms (AWS or Azure)

Familiarity with Unity Catalog and enterprise data governance concepts

Experience with spec-driven development and coding agents

Nice to Have

Experience with Databricks infrastructure tuning and cost optimization

Exposure to streaming frameworks and real-time data processing

Experience with Infrastructure-as-Code (Terraform or similar)

What Success Looks Like

Reliable, scalable, and cost-efficient Databricks data and ML pipelines

Smooth productionization of ML models with strong collaboration across teams

High data quality, observability, and platform stability

Well-governed data assets with clear ownership and lineage


Apply for this Position

Ready to join? Submit your application below.
