Job Description

Key Responsibilities:

  • Design and develop scalable data ingestion pipelines using Apache Spark, Databricks, or equivalent big-data frameworks.
  • Create streaming ingestion workflows that consume files from cloud storage or messages from Kafka/RabbitMQ/Confluent Cloud and land them in Delta Lake, handling schema evolution and guaranteeing exactly-once semantics (see the sketch after this list).
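
As a rough illustration of the second responsibility, here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and appends into a Delta table. The broker address, topic name, event schema, and storage paths are all placeholders, and it assumes the spark-sql-kafka and Delta Lake packages are available; a production pipeline would add payload validation, monitoring, and error handling. The checkpoint location is what gives Structured Streaming its exactly-once guarantee into Delta, and mergeSchema enables additive schema evolution on write.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

    # Hypothetical event schema; a real pipeline would pull this from a schema registry.
    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("payload", StringType()),
        StructField("event_ts", TimestampType()),
    ])

    # Stream messages from a Kafka topic (broker and topic names are placeholders).
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .option("startingOffsets", "earliest")
        .load()
    )

    # Kafka delivers raw bytes; decode the value column and parse it as JSON.
    events = raw.select(
        from_json(col("value").cast("string"), event_schema).alias("e")
    ).select("e.*")

    # Append into Delta Lake. The checkpoint tracks Kafka offsets alongside
    # Delta commits, which is what yields end-to-end exactly-once delivery;
    # mergeSchema lets new columns evolve the table schema additively.
    (
        events.writeStream.format("delta")
        .outputMode("append")
        .option("checkpointLocation", "/mnt/checkpoints/events")  # placeholder path
        .option("mergeSchema", "true")
        .start("/mnt/delta/events")  # placeholder table path
        .awaitTermination()
    )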
