Job Description
SDE 2 – Data Platform Engineer
We are looking for a strong, hands-on SDE 2 – Data Engineer to contribute to the design, development, and scaling of our data infrastructure. The role requires solid technical expertise in building reliable data pipelines, working with big data technologies, and ensuring high-quality data delivery. You will work closely with senior engineers and cross-functional teams to build scalable data systems that support analytics, reporting, and business operations at Licious.
Ideal Experience: 2–5 years
What You’ll Do
Core Engineering Responsibilities
- Develop and maintain scalable ETL/ELT data pipelines and data processing workflows.
- Contribute to design discussions, provide code reviews, and ensure high-quality engineering practices.
- Implement real-time and batch data processing solutions using technologies such as Spark, Flink, Kafka, or Debezium.
- Ensure data quality, observability, lineage, and monitoring across pipelines.
Data Infrastructure & Architecture
- Participate in building and optimizing data platforms that support high-volume and high-velocity data.
- Contribute to architectural decisions under the guidance of senior engineers and architects.
- Implement best practices around data modeling, partitioning, indexing, and performance optimization.
Collaboration & Cross-team Work
- Work with product, analytics, and ML teams to understand data requirements and deliver reliable data assets.
- Collaborate with DevOps and platform teams to ensure efficient, scalable, and cost-optimized deployments.
What You’ll Bring
Technical Skills
- Hands-on experience with big data technologies:
  - Batch & Stream Processing: Spark, Flink, Kafka, Debezium.
  - Warehousing & Analytics: Redshift, Snowflake, ClickHouse, Hive, Hudi, Iceberg, or Databricks.
  - Orchestration: Airflow, Azkaban, or Luigi.
- Strong SQL expertise (MySQL/PostgreSQL) and familiarity with NoSQL systems like MongoDB or Cassandra.
- Working knowledge of AWS services such as S3, EC2, EMR, Glue, RDS, Redshift, Lambda, MSK, or SQS.
- Proficiency in Python, Scala, or Java for building data applications.
- Understanding of software engineering fundamentals — CI/CD, testing, version control.
Soft Skills
- Strong problem-solving skills with the ability to take ownership of components.
- Ability to work independently and collaborate effectively with cross-functional teams.
- Eagerness to learn, adapt, and grow towards higher technical responsibilities.
Preferred Qualifications
- Experience with data lakehouse architectures.
- Familiarity with Docker or Kubernetes.
- Exposure to data observability or MLOps frameworks.
- Experience working in fast-paced or consumer-tech environments.
Apply for this Position
Ready to join? Click the button below to submit your application.
Submit Application