Job Description:

  • At least 6 years of experience as a Data Engineer, working with Hadoop, Spark, and data processing technologies in large-scale environments
  • Experience with Grafana, Prometheus, or Splunk will be an added benefit
  • Quantexa exposure and/or certification is a strong plus
  • Strong expertise in designing and developing data infrastructure using Hadoop, Spark, and related tools (HDFS, Hive, Pig, etc.)
  • Experience with containerization platforms such as OpenShift Container Platform (OCP) and container orchestration using Kubernetes
  • Proficiency in programming languages and frameworks commonly used in data engineering, such as Spark, Python, Scala, or Java
  • Knowledge of DevOps practices, CI/CD pipelines, and infrastructure automation tools (e.g., Docker, Jenkins, Ansible, Bitbucket)
  • Experience with job schedulers such as Control-M

Apply for this Position

Ready to join UNISON Group? Submit your application below.
