Job Description

Job Title: PySpark Data Engineer
Experience: 4+ Years
Location: Hyderabad

Job Summary:
We are seeking a Senior Spark Engineer to design and implement high-performance Spark execution patterns inside xFlows, supporting batch and streaming pipelines with built-in data quality, observability, and governance.

Requirements

Key Responsibilities:
- Design and implement Spark-based execution frameworks for xFlows pipelines.
- Build reusable Spark components for:
  - Readers (JDBC, Files, Kafka, CDC)
  - Transformers (Join, Filter, Aggregate, Window, Union)
  - Writers (Iceberg, Delta, Parquet, Snowflake)
- Optimize Spark performance (partitioning, caching, shuffles, memory).
- Implement Data Quality & Reconciliation execution patterns.
- Handle schema evolution, CDC, watermarking, and checkpoints.
- Integrate Spark jobs with EMR Serverless / Databricks / Kubernetes.
- Publish execution metrics, logs, and lineage for observability.
- Work closely with platform & UI teams to support no-code execution.

Required Skills:
- 4+ ye...
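For candidates unfamiliar with the reader/transformer/writer component pattern named in the responsibilities, a minimal sketch of the idea is below. This is illustrative only, not DATAECONOMY's actual framework: it uses plain Python callables over row dictionaries in place of real Spark DataFrames, and every name in it (run_pipeline, list_reader, filter_transform) is hypothetical.

```python
from typing import Any, Callable, Dict, List

# Simplified stand-ins for Spark-based pipeline stages.
Row = Dict[str, Any]
Reader = Callable[[], List[Row]]
Transformer = Callable[[List[Row]], List[Row]]
Writer = Callable[[List[Row]], None]

def run_pipeline(reader: Reader, transformers: List[Transformer], writer: Writer) -> None:
    """Read rows, apply each transformer in order, then write the result."""
    rows = reader()
    for transform in transformers:
        rows = transform(rows)
    writer(rows)

def list_reader(data: List[Row]) -> Reader:
    """A trivial 'reader' component backed by an in-memory list."""
    return lambda: list(data)

def filter_transform(predicate: Callable[[Row], bool]) -> Transformer:
    """A trivial 'filter' transformer component."""
    return lambda rows: [r for r in rows if predicate(r)]

# Wire components together: read -> filter -> write to an in-memory sink.
sink: List[Row] = []
run_pipeline(
    list_reader([{"id": 1, "amt": 10}, {"id": 2, "amt": 0}]),
    [filter_transform(lambda r: r["amt"] > 0)],
    sink.extend,
)
# sink now holds only the rows with amt > 0
```

In a real Spark framework the same composition idea applies, with each component producing or consuming a DataFrame instead of a list.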

Apply for this Position

Ready to join DATAECONOMY? Click the button below to submit your application.

Submit Application