Job Description

Role: Data Engineer – Big Data (Batch & Streaming)

Location: Bengaluru, India (Hybrid – 3 days onsite per week)

Employer: Flexton Inc.

Client Domain: Globally recognized Retail / Marketplace organization

Note: Candidates must be available to start immediately or within a 30-day notice period.


Job Summary:

We are seeking a Data Engineer to build, enhance, and support large-scale batch and streaming data pipelines for fraud detection and risk analysis. The role focuses on processing user and transactional data with Spark SQL, Flink SQL, and Hadoop/HDFS, with light Java work for MapReduce integration, and includes production support to ensure timely, accurate, and complete data availability.


Must-Have Skills:

  • Expert-level Spark SQL experience on Hadoop-based platforms, including performance tuning, join optimization, partitioning strategies, and troubleshooting job failures in production environments.
  • Proven experience with incremental data processing, including upserts, late-arriving data handling, reruns, and backfills, ensuring data accuracy and consistency at scale (see the first sketch after this list).
  • Practical Flink SQL experience for streaming pipelines (windows, joins, state handling); see the second sketch after this list.
  • Advanced SQL skills, including complex joins, window functions, and time-based logic on large datasets.
  • Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN/EMR, data layouts), with working knowledge of reading and debugging MapReduce and Spark jobs (basic Java understanding required).
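
For illustration, here is a minimal Spark SQL sketch of the incremental-processing pattern referenced above. The table names (txn_events, txn_events_stage), their columns, and the run_date parameter are hypothetical; the point is that reruns and backfills stay idempotent because each run overwrites only its own date partition, while a window function keeps the latest record per transaction to absorb late-arriving data.

    -- Hypothetical tables: txn_events (target, partitioned by dt) and
    -- txn_events_stage (raw daily extract that may contain duplicates or late rows).
    SET spark.sql.sources.partitionOverwriteMode = dynamic;

    INSERT OVERWRITE TABLE txn_events PARTITION (dt)
    SELECT user_id, txn_id, amount, event_time, dt
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (PARTITION BY txn_id ORDER BY event_time DESC) AS rn
      FROM txn_events_stage
      WHERE dt = '${run_date}'   -- placeholder for the date being (re)processed
    ) deduped
    WHERE rn = 1;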
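
And a minimal Flink SQL sketch of the kind of streaming aggregation a fraud pipeline might compute. The payments source table and its columns are hypothetical, and a watermark on txn_time is assumed to be declared in the table DDL; the query counts and sums each user's payments in 10-minute tumbling windows.

    -- Hypothetical streaming table: payments(user_id, amount, txn_time),
    -- with a watermark declared on txn_time in its DDL.
    SELECT
      user_id,
      window_start,
      window_end,
      COUNT(*)    AS txn_cnt,
      SUM(amount) AS total_amount
    FROM TABLE(
      TUMBLE(TABLE payments, DESCRIPTOR(txn_time), INTERVAL '10' MINUTES)
    )
    GROUP BY user_id, window_start, window_end;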
