Job Description

Company Profile:

Easebuzz is a fintech company whose payment solutions enable online merchants to accept, process, and disburse payments through developer-friendly APIs. We focus on building plug-and-play products, including the payment infrastructure needed to solve complete business problems. It is a place where payments, lending, subscriptions, and eKYC all come together. We have been consistently profitable and are constantly developing innovative new products; as a result, we grew 4x over the past year alone. We are well capitalised and closed a $4M fundraise in March 2021 from prominent VC firms and angel investors.

The company is based out of Pune and has a total strength of 120 employees. Easebuzz's corporate culture is tied to the vision of building a workplace that breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.

Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, and Gurugram.

Salary: As per company standards.

Designation: Data Engineer

Location: Pune

Overview:

We are seeking a talented and motivated Data Engineer with 2-5 years of experience in stream processing, particularly with Apache Flink and Kafka, along with expertise in Spark and AWS technologies. As a Data Engineer, you will play a crucial role in designing, implementing, and maintaining robust stream processing solutions that handle large volumes of data in real time while ensuring high performance, scalability, and reliability.

Responsibilities:

Stream Processing Development: Design, develop, and optimize stream processing pipelines using Apache Flink and Kafka to process real-time data streams efficiently. 

Data Ingestion: Implement robust data ingestion pipelines to collect, process, and distribute streaming data from various sources into the Flink and Kafka ecosystem.

Data Transformation: Perform data transformation and enrichment operations on streaming data using Spark Streaming and other relevant technologies to derive actionable insights.

Performance Optimization: Continuously optimize stream processing pipelines for performance, scalability, and reliability, ensuring low-latency and high-throughput data processing.

Monitoring and Troubleshooting: Monitor stream processing jobs, troubleshoot issues, and implement necessary optimizations to ensure smooth operation and minimal downtime.

Integration with AWS Services: Leverage AWS technologies such as Amazon Kinesis, AWS Lambda, Amazon EMR, and others to build end-to-end stream processing solutions in the cloud environment.

Data Governance and Security: Implement data governance and security measures to ensure compliance with regulatory requirements and protect sensitive data in streaming pipelines.

Collaboration: Collaborate closely with cross-functional teams including data scientists, software engineers, and business stakeholders to understand requirements and deliver impactful solutions.

Documentation: Create and maintain comprehensive documentation for stream processing pipelines, including design specifications, deployment instructions, and operational procedures.

Continuous Learning: Stay updated with the latest advancements in stream processing technologies, tools, and best practices, and incorporate them into the development process as appropriate.

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 2-5 years of professional experience in data engineering roles with a focus on stream processing.
  • Strong proficiency in Apache Flink and Kafka for building real-time stream processing applications.
  • Hands-on experience with Spark and Spark Streaming for batch and stream processing.
  • Solid understanding of cloud computing platforms, particularly AWS services such as Amazon Kinesis, AWS Lambda, Amazon EMR, etc.
  • Proficiency in programming languages such as Java, Scala, or Python.
  • Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
  • Excellent problem-solving skills and the ability to troubleshoot complex distributed systems.
  • Strong communication skills and the ability to work effectively in a collaborative team environment.

Employment Type

Full-time

Apply for this Position

Ready to join? Click the button below to submit your application.

Submit Application