Azure Data Engineer

Req number:

R6875

Employment type:

Full time

Worksite flexibility:

Hybrid

Who we are

CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right—whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.

Job Summary

We are looking for a motivated Azure Data Engineer ready to take us to the next level! If you understand Databricks, Python, Azure Data Services, Azure DevOps, and API integrations and are looking forward to your next career move, apply now!

Job Description

We are looking for an Azure Data Engineer. This position will be full-time and hybrid, based in Bangalore.


What You’ll Do

  • Design, build, and optimize data ingestion and ETL/ELT pipelines using Azure Databricks and Azure Data Factory.

  • Work extensively with Delta Lake, Structured Streaming, and Lakehouse architecture.

  • Develop scalable PySpark and Python transformation logic for batch and streaming workloads.

  • Design and maintain data models (Dimensional, SCD Types 1 & 2, Lakehouse schemas).

  • Manage data storage using Azure Data Lake Gen2, Delta tables, and Parquet formats.

  • Integrate with REST, Graph, and custom APIs for data access and ingestion, including OAuth or managed authentication flows.

  • Implement CI/CD pipelines for data solutions using Azure DevOps (Repos, Pipelines, Release automation).

  • Implement strong security practices: RBAC, Key Vault, managed identities, and network security.

  • Maintain documentation, knowledge sharing, and best practices across the team.

  • Apply CI/CD for Databricks using Databricks Asset Bundles (DABs).

What You'll Need

Required:

  • 7–8 years of total experience in Data Engineering.

  • Azure Databricks (Jobs, Workflows, SQL Warehouses, Unity Catalog).

  • Python / PySpark.

  • Azure Data Factory & Azure Data Lake Storage.

  • Azure DevOps (Git branching, CI/CD pipelines).

  • API integration (REST, Graph API, token-based auth).

  • Strong command of SQL performance tuning, CTEs, and window functions.

  • Knowledge of distributed systems, partitioning, and big data processing.

  • Experience with Delta Lake principles (ACID, time travel, vacuum, Z-ordering).

  • Understanding of Data Governance concepts.

Preferred:

  • Experience with event-driven architecture (Kafka/Event Hub).

  • Knowledge of Python backend development.

Physical Demands

  • Sedentary work that involves sitting or remaining stationary most of the time, with occasional need to move around the office to attend meetings, etc.

  • Ability to conduct repetitive tasks on a computer, utilizing a mouse, keyboard, and monitor.

Reasonable accommodation statement

If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to or (888) 824-8111.

Apply for this Position

Ready to join? Click the button below to submit your application.

Submit Application