Job Description
Responsibilities
- Architect and lead Databricks Lakehouse solutions using Delta Lake, Unity Catalog, and MLflow.
- Design and develop high-volume batch and real-time pipelines using Spark (PySpark / Scala).
- Implement the medallion architecture (bronze–silver–gold layering) and scalable data models (see the pipeline sketch after this list).
- Build ingestion frameworks using Azure Data Factory (ADF), cloud-native services, Kafka, and Azure Event Hubs.
- Define and enforce coding standards, performance benchmarks, and cost controls.
- Lead Databricks cluster strategy, job orchestration, and workload optimisation.
- Implement data quality checks, validation rules, reconciliation, and DQ scorecards (a reconciliation sketch follows the list).
- Enable metadata management, lineage, and cataloguing across enterprise datasets.
- Design secure access using RBAC/ABAC, Unity Catalog, data masking, and row/column-level security.
- Integrate CI/CD pipelines for notebooks, jobs, and infrastructure (Git, Azure DevOps, …).
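
To illustrate the kind of medallion-pattern work described above, here is a minimal PySpark sketch of a bronze-to-gold promotion on Delta Lake. The landing path, table names, and the event schema (`event_id`, `event_ts`) are illustrative assumptions, not details from this posting, and the `bronze`/`silver`/`gold` schemas are assumed to already exist.

```python
# A minimal sketch of bronze -> silver -> gold promotion in a medallion
# pipeline. All paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw JSON as-is, preserving source fidelity.
raw = spark.read.json("/landing/events/")  # hypothetical landing path
raw.write.format("delta").mode("append").saveAsTable("bronze.events")

# Silver: cleanse and conform - drop malformed rows, standardise types,
# and deduplicate on the business key.
silver = (
    spark.read.table("bronze.events")
    .where(F.col("event_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["event_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")

# Gold: aggregate into a consumption-ready model.
gold = silver.groupBy(F.to_date("event_ts").alias("event_date")).count()
gold.write.format("delta").mode("overwrite").saveAsTable(
    "gold.daily_event_counts"
)
```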
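And as one example of the reconciliation checks the role calls for, the sketch below compares row counts between the hypothetical silver and gold tables from the previous example and fails the job on any mismatch; the tables and the hard-fail policy are assumptions for illustration.

```python
# An illustrative reconciliation rule: every silver row must be
# represented in the gold rollup. Table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

silver_count = spark.read.table("silver.events").count()
gold_count = (
    spark.read.table("gold.daily_event_counts")
    .agg(F.sum("count").alias("total"))
    .collect()[0]["total"]
    or 0
)

# Fail fast so downstream consumers never see a partial load.
if silver_count != gold_count:
    raise ValueError(
        f"Reconciliation failed: silver={silver_count}, gold={gold_count}"
    )
```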