Job Description

Responsibilities

Data Engineering and Processing:

• Develop and manage data pipelines using PySpark on Databricks.

• Implement ETL/ELT processes to process structured and unstructured data at scale.

• Optimize data pipelines for performance, scalability, and cost-efficiency in Databricks.

Databricks Platform Expertise:

• Design, develop, and deploy solutions using Azure services (Data Factory, Databricks, PySpark, SQL).

• Develop and maintain scalable data pipelines and build new data source integrations to support increasing data volume and complexity.

• Leverage the Databricks Lakehouse architecture for advanced analytics and machine learning workflows.

• Manage Delta Lake for ACID transactions and data versioning.

• Develop notebooks and workflows for end-to-end data solutions.

Cloud Platforms and Deployment:

• Deploy and manage Databricks on Azure (e.g., Azure Databr...
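As a flavor of the pipeline work described above, here is a minimal sketch of an ETL cleansing step as it might appear in a PySpark job. The column names, paths, and table names are hypothetical; the parsing logic is kept as a plain Python function so it can be unit-tested without a cluster and then registered as a UDF on Databricks.

```python
def normalize_amount(raw):
    """Parse a raw currency string like ' $1,234.50 ' into a float, or None."""
    if raw is None:
        return None
    cleaned = raw.strip().lstrip("$").replace(",", "")
    try:
        return float(cleaned)
    except ValueError:
        # Unparseable values become nulls for downstream quality checks.
        return None

# On Databricks, this function would typically be wrapped as a UDF and applied
# in a pipeline writing to a Delta table (all names below are illustrative):
#
#   from pyspark.sql import functions as F, types as T
#   norm = F.udf(normalize_amount, T.DoubleType())
#   (spark.read.json("/mnt/raw/orders")              # hypothetical source path
#         .withColumn("amount", norm("raw_amount"))
#         .write.format("delta").mode("append")
#         .saveAsTable("silver.orders"))             # hypothetical Delta table
```

Keeping the transform as an ordinary function, separate from the Spark plumbing, is a common pattern that makes pipeline logic testable in CI without a Spark session.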

Apply for this Position

Ready to join VAYUZ Technologies? Submit your application.
