Job Description
We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 8,000 CI&Ters around the world, we've built partnerships with more than 1,000 clients over our 30-year history. Artificial Intelligence is our reality.
Key Responsibilities
Design, develop, and maintain scalable data pipelines using Azure Databricks (PySpark, Spark SQL, Delta Lake).
Implement code-first data engineering solutions, following strong software engineering and DataOps principles.
Build and orchestrate data ingestion and transformation pipelines using Azure Data Factory.
Integrate data from multiple sources (APIs, relational databases, event-based systems, files, etc.).
Apply ETL/ELT patterns for analytical and operational use cases.
Ensure data quality, reliability, security, and governance, leveraging Azure-native services and Databricks capabilities.
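To give candidates a feel for the day-to-day work described above, here is a minimal sketch of a pipeline step of the kind this role involves, assuming a Databricks-style environment with PySpark and Delta Lake available; the paths, table names, and columns are hypothetical and for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative PySpark/Delta Lake ingestion step (hypothetical paths and names).
spark = SparkSession.builder.appName("orders_ingestion").getOrCreate()

# Read raw JSON files landed by an upstream source (e.g. an API export).
raw = spark.read.json("/mnt/raw/orders/")

# Basic cleanup: deduplicate on a business key and stamp the load time.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("loaded_at", F.current_timestamp())
)

# Persist as a Delta table so downstream jobs get ACID guarantees and time travel.
clean.write.format("delta").mode("append").saveAsTable("analytics.orders")
```

In practice a job like this would be orchestrated by Azure Data Factory and governed with Azure-native services, as outlined in the responsibilities above.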