Job Description

Candidates ready to join immediately can share their details via email for quick processing.

📌 Current CTC (CCTC) | Expected CTC (ECTC) | Notice Period | Location Preference

[email protected]

Apply promptly for immediate consideration! ⏳📩


Must-Have Skills

  • 5+ years of experience in Data Engineering
  • Strong hands-on experience with Python
  • Proficiency in PySpark for distributed computation
  • Good knowledge of SQL for data transformations and querying
  • Experience developing pipelines, workflows, and applications on Palantir Foundry
  • Ability to design, build, and optimize large-scale data pipelines
  • Familiarity with cloud environments (AWS preferred)
  • Strong debugging, performance tuning, and problem-solving skills
  • Ability to work in fast-paced, Agile environments
  • Immediate joiner preferred

Good-to-Have Skills

  • Experience with Scala
  • Knowledge of AWS services such as S3, Lambda, Glue, and EMR
  • Understanding of CI/CD pipelines and DevOps practices
  • Experience with data modeling, data quality, and governance frameworks
  • Exposure to data visualization or BI tools
  • Experience working in product engineering or large-scale enterprise environments

Key Responsibilities

  • Develop and optimize data pipelines and Foundry applications
  • Write efficient, scalable code using Python, PySpark, and SQL
  • Work with cross-functional teams to understand business requirements
  • Perform data ingestion, transformation, validation, and quality checks
  • Optimize distributed workloads and troubleshoot performance issues
  • Deploy and maintain Foundry data assets and workflows
  • Ensure best practices for code quality, versioning, and documentation

Apply for this Position

Ready to join? Submit your application by sharing your details at the email address above.
