Job Description

Main Duties:
Data Pipeline Development and Management

  • Design, build, and maintain data pipelines and ELT/ETL workflows in AWS to support enterprise analytics and reporting.
  • Develop and optimize dbt models that transform raw data into reliable, well-structured datasets for business use.
  • Orchestrate workflows using Apache Airflow (MWAA) to ensure automation, reliability, and reproducibility of data processes.
  • Partner with the Data Engineering team to align data architecture and ensure schema design supports analytics and business requirements.
  • Monitor and optimize pipeline performance, reliability, and data quality across all layers of the data stack.

Analytics Enablement and Collaboration

  • Collaborate with analysts and business stakeholders to understand data needs and deliver governed, high-quality datasets.
  • Build and maintain Power BI data models and support analysts in optimizing report performance and structure.
  • Ensure data models and transformations are well-documented, version-controlled, and tested as part of a mature CI/CD analytics workflow.
  • Engage with the broader analytics team to identify opportunities for automation, standardization, and improved data accessibility.

Data Governance and Quality Assurance

  • Support Welo Data’s ongoing data quality and governance initiatives by ensuring accuracy, consistency, and traceability within pipelines and data models.
  • Collaborate with the Data Quality & Governance Lead to identify and resolve data quality issues and contribute to continuous improvement of data management processes.
  • Contribute to the development and maintenance of metadata, lineage, and documentation standards for analytics assets.

Agile Collaboration and Delivery

  • Participate in sprint planning, retrospectives, and backlog grooming using Jira to manage tasks and progress.
  • Document architecture decisions, data models, and workflows in Confluence to promote transparency and knowledge sharing.
  • Operate effectively in an Agile environment, balancing technical rigor with responsiveness to evolving business needs.

Additional Job Description:

  • Bachelor’s degree in Computer Science, Information Systems, or a related field strongly preferred; equivalent experience acceptable. Master’s degree is a plus.
  • Minimum of 4 years of experience in data engineering, analytics engineering, or a related role with demonstrated impact in building and maintaining data pipelines and models.
  • Proficiency in SQL and dbt for data transformation and modeling.
  • Hands-on experience with AWS services, including Redshift, S3, Lambda, and Glue.
  • Experience with Airflow (MWAA or similar orchestration tools) for workflow management.
  • Familiarity with CI/CD workflows and version control using Git.
  • Understanding of data warehousing concepts, dimensional modeling, and ELT design principles.
  • Experience working within Agile methodologies, with practical use of Jira and Confluence for planning and documentation.
  • Excellent communication and collaboration skills, with the ability to engage effectively across technical and non-technical stakeholders.
  • (Preferred) Proficiency in Power BI, including data modeling and performance optimization.
  • (Preferred) Knowledge of Python for automation and data pipeline development.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
