Job Description
About Zeno
An unprecedented energy transition has begun. To meet 2040 net zero goals, over 2 billion electric two-wheelers (motorcycles) must be sold and $11 trillion in fuel consumption replaced. Zeno is building a tech platform to electrify the two-wheeler market, which is currently responsible for more than 4% of the world’s GHG emissions. Our mission is to accelerate the energy transition and democratize its benefits across Africa, India, and Latin America. Starting in East Africa, Zeno is building a new energy ecosystem: a ground-up, fundamentally better electric motorcycle and an associated battery swap network that together deliver a better experience for our customers.
We are looking for audacious, creative, and committed people to join us on this important mission. You will be joining a bold team of leading engineers, operators, and entrepreneurs from companies including Tesla, Apple, Google, SRAM, Ola Electric, Dott, LiveWire, Lucid, Bolt, Microsoft, SafeBoda, and Sun Mobility, among many others. Zeno has just closed its seed round with Lowercarbon, Silicon Valley’s leading climate tech venture fund; Toyota Ventures, the venture arm of the world’s largest automaker; and 4DX Ventures, a leading early-stage investor in Africa.
The Role
Zeno is seeking a highly skilled and hands-on Data Engineer to design, build, and scale our data ecosystem. This role will be central to enabling data-driven decision-making across the organization by managing large volumes of structured and unstructured data, building a robust data lake, and exposing reliable datasets and reports to multiple teams including product, operations, finance, and leadership.
The ideal candidate combines strong engineering fundamentals with a practical understanding of analytics and operational use cases, ensuring data is accessible, accurate, reliable, and actionable across the business.
What You’ll Do
Data Architecture & Ecosystem Design
- Design, build, and maintain scalable data architecture including cloud-based data lakes, warehouses, and streaming systems to support ingestion, storage, processing, and analytics of large datasets.
Data Pipeline Development
- Develop and optimize reliable batch and streaming ETL/ELT pipelines using technologies such as Apache Spark, Flink, Beam, Kafka, and Airflow.
- Ingest data from multiple sources including internal systems, IoT devices, APIs, and third-party platforms.
Data Lake & Warehouse Management
- Build and manage cloud-native data lakes and warehouses using platforms such as BigQuery, Snowflake, or Redshift, ensuring performance, scalability, and cost efficiency.
Data Modeling & Transformation
- Create efficient data models, transformations, and aggregations using SQL and modern data processing frameworks to support analytics, reporting, and downstream consumption.
Analytics & Reporting Enablement
- Expose clean, well-documented datasets to BI and analytics tools such as Looker, Grafana, and Kibana to support dashboards, operational reporting, and ad-hoc analysis.
Data Quality, Reliability & Observability
- Implement data validation, monitoring, alerting, and quality checks to ensure accuracy, completeness, and consistency across pipelines and datasets.
Performance & Scalability Optimization
- Optimize data storage, query performance, and processing workflows to handle high data volumes and evolving business needs.
Collaboration with Stakeholders
- Work closely with product, operations, finance, and analytics teams to translate business requirements into scalable data engineering solutions.
Security, Governance & Compliance
- Implement data access controls, governance standards, and best practices for data privacy, security, and compliance across the data platform.
DevOps & Engineering Best Practices
- Build and deploy data services using Docker and Kubernetes.
- Set up and maintain CI/CD pipelines for data infrastructure and code deployments.
- Use GitHub/GitLab for version control, collaboration, and code reviews.
Documentation & Best Practices
- Maintain clear documentation of data pipelines, schemas, and processes, and establish best practices for data engineering across teams.
What You Bring
- Proven experience as a Data Engineer working with large-scale data systems and complex data pipelines
- Strong programming skills in Go, Rust, or Python, with deep expertise in SQL
- Hands-on experience with Apache Spark, Flink, Beam, Kafka, and Airflow
- Experience with cloud data warehouses such as BigQuery, Snowflake, or Redshift
- Strong experience with AWS and/or GCP cloud environments
- Solid understanding of data lake architectures, ETL/ELT design, and performance optimization
- Experience deploying and operating data systems using Docker and Kubernetes
- Familiarity with CI/CD pipelines and modern DevOps practices
- Experience enabling analytics and reporting using tools such as Looker, Grafana, or Kibana
- Proficiency with GitHub or GitLab for source control and collaboration
- Strong problem-solving skills with a high bar for data quality and reliability
- Ability to translate business and analytics requirements into scalable technical solutions
- Comfort working in a fast-paced, evolving environment with multiple stakeholders
- Clear communication skills and the ability to explain data systems to both technical and non-technical audiences
Benefits
- Competitive salary based on experience
- Company-sponsored healthcare plan
- Join a world-class team of engineers, operators, and entrepreneurs from across the globe who are part of the inevitable trillion-dollar transition of two-wheelers to electric
Apply for this Position
Ready to join? Submit your application below.