Job Description
Title: Big Data Engineer (2 Positions)
Company: Powercozmo
Industry: B2B eCommerce Platform
Experience: 3+ Years
Employment Type: Full-time
THE JOB WILL INITIALLY BE LOCATED IN AMMAN, JORDAN. IT REQUIRES VALID PASSPORT HOLDERS WHO CAN TRAVEL IMMEDIATELY (within 2-3 days) AND WORK ON-SITE.
About Power Cozmo
Power Cozmo is a growing B2B eCommerce platform focused on building scalable, data-driven solutions for enterprises. Our platform relies heavily on modern cloud and big data technologies to power analytics, personalization, integrations, and operational intelligence. We are building robust data foundations that scale with the business, and we’re looking for engineers who want to grow with a startup and take ownership of core systems.
Role Overview
We are hiring two Big Data Engineers to design, build, and maintain scalable data platforms and pipelines on AWS. You will work on batch and near-real-time data processing, lakehouse architectures, and high-performance analytics systems. This role is well-suited for engineers who enjoy end-to-end ownership, hands-on development, and working in a fast-paced startup environment.
Key Responsibilities
Data Engineering & Pipelines
• Design, develop, and maintain ETL / ELT pipelines for batch and near-real-time data processing
• Build data ingestion systems using PySpark, Kafka, webhooks, APIs, and event-driven architectures
• Process structured, semi-structured, and unstructured data from multiple sources
• Ensure data quality, reliability, monitoring, and performance optimization
Big Data, Analytics & Lakehouse
• Design and manage lakehouse architectures combining data lakes and analytical engines
• Optimize data storage, partitioning, and query performance
• Enable downstream analytics, reporting, and ML use cases
AWS & Cloud Technologies
• Work extensively with AWS services, including but not limited to: S3, Glue, EMR, Athena, Redshift, Lambda, EC2, IAM, CloudWatch
• Deploy, monitor, and optimize data workloads in AWS
• Apply cloud best practices for scalability, cost, and security
Databases & Data Platforms
• Design, create, and administer SQL and NoSQL databases
• Perform schema design, indexing, performance tuning, and access control
• Work with high-performance analytical databases such as ClickHouse
• Implement and maintain Graph Databases (GraphDB, Neo4j, or similar) for relationship-based use cases
Collaboration & Ownership
• Collaborate with backend, frontend, product, and analytics teams
• Participate in architecture discussions and technical decision-making
• Take ownership of data systems in a startup environment
Required Qualifications
• Education: BTech (Computer Science / IT) or MCA
Experience
• Minimum 3 years of corporate project experience as a Big Data Engineer / Data Engineer
• Hands-on experience delivering production-grade data systems
Technical Skills
• Strong hands-on experience with PySpark
• Experience with ETL / ELT concepts and data modeling
• Experience building pipelines using Kafka, streaming systems, or webhooks
• Solid experience working with AWS cloud services
• Strong SQL skills and understanding of distributed systems
Preferred / Nice-to-Have Skills
• Experience with Graph Databases (GraphDB, Neo4j, or similar)
• Hands-on experience with ClickHouse for large-scale analytical workloads
Apply for this Position
Ready to join? Click the button below to submit your application.
Submit Application