Job Description

Locations - Indore / Noida / Bengaluru / Pune / Gurgaon


Requirements


  • 2–5 years of hands-on experience as a Data Engineer
  • Strong experience with Databricks and Apache Spark
  • Proficiency in Python for data engineering and ETL development
  • Experience building and managing ETL pipelines
  • Experience with at least one cloud platform (AWS, Azure, or GCP)
  • Familiarity with Delta Lake and optimized data storage formats
  • Understanding of Generative AI fundamentals (LLMs, embeddings, RAG concepts)


Good to Have


  • Hands-on experience with GenAI frameworks (LangChain, LangGraph, etc.)
  • Experience working with vector databases (PGVector, Pinecone, Chroma, etc.)
  • Knowledge of data orchestration tools (Airflow, Databricks Workflows)
