Job Description
Unlock the Future of Data Engineering
Our Mission:
We strive to provide innovative, business-driven solutions that set a benchmark in consulting.
Distributed Data Processing Specialist
We are seeking a talented data engineer to develop scalable data pipelines and process large datasets using Databricks and PySpark.
- Design and implement distributed data processing systems with Apache Spark on Databricks
- Design NoSQL schemas for MongoDB and tune their performance
- Ingest, transform, and orchestrate data using Delta Lake, Apache Airflow, and Azure Data Factory
- Implement CI/CD pipelines for data engineering projects using Git version control
- Champion Agile methodologies and DevOps practices in collaborative environments
Apply for this Position
Ready to join beBeeDataEngineer? Click the button below to submit your application.