Job Description
Key Responsibilities:
Design, develop, and manage ETL pipelines using Azure Databricks and PySpark
Integrate and process data using Azure Data Lake Storage (ADLS)
Orchestrate workflows and schedule jobs on Databricks
Tune and optimize pipeline performance and carry out system maintenance
Automate operational tasks and support incident resolution
Collaborate with stakeholders and participate in Agile ceremonies
Required Skills:
Hands-on experience with Azure Databricks for ETL and big data pipelines
Strong PySpark and ETL development skills
Proficiency in Azure Data Lake Storage (ADLS)
Experience with Apache NiFi for data ingestion and workflows
Good problem-solving and troubleshooting skills
Good to Have:
Exposure to Azure Synapse, Azure Data Factory, Azure SQL, or Azure Functions
Knowledge of streaming or real-time data pipelines
Strong communication and cross-team collaboration skills
Requirements
Azure Databricks, ETL Development, Azure Data Lake Storage, PySpark, Apache NiFi, Azure Cloud Services