Job Description


Title: Lead AWS Data Engineer
Location: % remote

Duration: months with extension based on performance

Work Requirements: US Citizens, GC Holders, or those Authorized to Work in the United States


Skillset / Experience:

We are seeking two Senior Data Engineers to join the POP Platform build-out initiative for Q . These engineers will play a key role in designing, developing, and optimizing cloud-based data pipelines and integrations that support large-scale analytics and data-driven decision-making.

The ideal candidates will have strong AWS expertise, hands-on experience with Snowflake, and a proven background in Python/Spark-based data engineering within a global, cross-functional environment.

Key Responsibilities
  • Design, build, and maintain scalable ETL/ELT pipelines using Python, PySpark, and AWS services (Glue, Lambda, S3, EMR, Step Functions).

  • Develop and optimize data models and warehouse schemas in Snowflake, ensuring high performance and reliability.

  • Implement data ingestion frameworks integrating structured, semi-structured, and unstructured data from multiple sources (APIs, files, databases).

  • Build and maintain REST APIs for data access and integration, including secure authentication and performance optimization.

  • Collaborate with data architects, analysts, and business stakeholders in a global, agile team environment to deliver high-quality data solutions.

  • Ensure compliance with AWS security best practices, including IAM roles, encryption, and data governance policies.

  • Set up and manage CI/CD pipelines, orchestration, and workflow automation using Airflow, Glue, or similar tools.

  • Implement robust monitoring, logging, and alerting to ensure reliability and observability across data pipelines.

Required Skills & Experience
  • + years of experience as a Data Engineer or similar role, with a strong background in cloud-native data platforms.

  • Deep expertise in AWS (networking, compute, storage, IAM, CloudWatch, security, cost optimization).

  • Strong proficiency in Snowflake – data modeling, performance tuning, and external integrations.

  • Advanced skills in Python and Spark for large-scale data transformation and processing.

  • Proven experience with API development (REST, authentication, scaling, and automation).

  • Hands-on experience with CI/CD pipelines and orchestration tools (AWS Glue, Airflow, Step Functions, or similar).

  • Strong understanding of monitoring and logging frameworks for data pipelines and applications.

  • Experience working in a global, agile, cross-functional environment, collaborating with distributed teams.

Preferred Qualifications
  • Experience with infrastructure as code (Terraform, CloudFormation).

  • Familiarity with modern data governance frameworks and data cataloging tools.

  • Exposure to real-time data streaming (Kinesis, Kafka) is a plus.

  • Experience integrating AI/ML or analytics platforms on Snowflake or AWS.

Why Join
  • Be part of a high-impact, global data platform initiative (POP Platform).

  • Opportunity to work with cutting-edge cloud and data technologies in a collaborative environment.

  • Contract extensions possible based on performance and project needs.



Our benefits package includes:
  • Comprehensive medical benefits

  • Competitive pay

  • 401(k) retirement plan

  • …and much more!

Apply for this Position
