Job Description
Responsibilities
- Build and maintain ETL/ELT pipelines for batch and streaming workloads
- Implement transformations, cleansing, and enrichment in Databricks
- Automate deployments and pipeline orchestration
- Ensure data quality via validation, monitoring, and alerting
- Operate an AWS-based lakehouse and optimize Databricks jobs/clusters for cost and performance
- Implement governance, lineage, access controls, and auditing across workspaces and cloud platforms
Qualifications
- 3+ years of data engineering experience with cloud platforms and distributed systems
- Strong knowledge of Apache Spark
- Experience with AWS data services and building lakehouse solutions
- Familiarity with orchestration tools for data pipelines
- DevOps/CI/CD experience (Terraform, GitHub Actions, Jenkins)
What we offer
- 24 calendar days of paid vacation per year (afte...
Apply for this Position
Ready to join Soft Industry Alliance LLC? Submit your application.