Job Description
Job Title: Data Engineer with Python (PySpark, Pandas & Azure Synapse)
Location: Hyderabad (Work from office/Hybrid)
Experience Required: 5+ years
About the Role
We are seeking a skilled Python Developer with expertise in PySpark, Pandas, and Azure Synapse Analytics. The role involves building efficient data pipelines, optimizing large-scale data processing workflows, and integrating backend solutions with Synapse for analytics and reporting. This position blends backend development and data engineering responsibilities to deliver scalable, high-performance solutions.
Key Responsibilities
- Develop and maintain data pipelines using Python, PySpark, and Pandas.
- Optimize ETL workflows for handling structured and unstructured data.
- Integrate with Azure Synapse Analytics for data storage, transformation, and reporting.
- Work with Azure Data Factory (ADF) or similar tools to orchestrate pipelines.
- Write optimized PySpark and Pandas code for large-scale data processing.
- Support backend services/APIs that consume or expose processed data.
- Collaborate with cross-functional teams (data engineers, analysts, and app developers) to deliver data-driven applications.
- Implement CI/CD pipelines, monitoring, and logging for production workloads.
- Ensure data security, compliance, and governance in cloud environments.
Required Skills
- Strong coding skills in Python with proven experience in Pandas for data wrangling and analysis.
- Proficiency with PySpark for distributed data processing.
- Hands-on experience with Azure Synapse Analytics (data warehousing, query optimization, pipelines).
- Advanced SQL skills for querying and performance tuning.
- Experience with Azure services (ADF, Data Lake, Functions, DevOps).
- Knowledge of CI/CD, Docker/Kubernetes, and Git/GitHub.
- Familiarity with REST APIs and backend integration patterns.