Job Description
Job Title: PySpark Data Engineer
Experience: 4+ Years
Location: Hyderabad
Job Summary:
We are seeking a Senior Spark Engineer to design and implement high-performance Spark execution patterns inside xFlows, supporting batch and streaming pipelines with built-in data quality, observability, and governance.
Requirements
Key Responsibilities
- Design and implement Spark-based execution frameworks for xFlows pipelines.
- Build reusable Spark components (see the illustrative sketch after this list) for:
  - Readers (JDBC, Files, Kafka, CDC)
  - Transformers (Join, Filter, Aggregate, Window, Union)
  - Writers (Iceberg, Delta, Parquet, Snowflake)
- Optimize Spark performance (partitioning, caching, shuffles, memory)
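For illustration only, a reusable component of the kind described above might look roughly like the following PySpark sketch. The class names (JdbcReader, FilterTransformer, ParquetWriter), the JDBC URL, and the output path are assumptions made for this example and are not part of the actual xFlows framework.

```python
# Minimal, illustrative sketch of a reader/transformer/writer component pattern.
# All names and connection details here are hypothetical placeholders.
from dataclasses import dataclass

from pyspark.sql import DataFrame, SparkSession
import pyspark.sql.functions as F


@dataclass
class JdbcReader:
    """Hypothetical reader component for a JDBC source."""
    spark: SparkSession
    url: str
    table: str

    def read(self) -> DataFrame:
        return (
            self.spark.read.format("jdbc")
            .option("url", self.url)
            .option("dbtable", self.table)
            .load()
        )


@dataclass
class FilterTransformer:
    """Hypothetical transformer component applying a SQL filter expression."""
    condition: str

    def transform(self, df: DataFrame) -> DataFrame:
        return df.filter(F.expr(self.condition))


@dataclass
class ParquetWriter:
    """Hypothetical writer component persisting results as Parquet."""
    path: str

    def write(self, df: DataFrame) -> None:
        df.write.mode("overwrite").parquet(self.path)


if __name__ == "__main__":
    spark = SparkSession.builder.appName("component-sketch").getOrCreate()
    # Example wiring of the components; URL and output path are placeholders.
    orders = JdbcReader(spark, "jdbc:postgresql://host:5432/db", "orders").read()
    filtered = FilterTransformer("amount > 0").transform(orders)
    ParquetWriter("/tmp/orders_filtered").write(filtered)
```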