Job Description

Job Role: AWS Data Engineer

Work Mode: Remote

Experience: 7+ Years


Job Roles & Responsibilities:


  • Design, build, and optimize data pipelines and scalable data assets.
  • Develop and maintain high-performance code using PySpark/Python with best practices.
  • Optimize Spark SQL and PySpark code for performance and efficiency.
  • Refactor and modernize legacy codebases to improve readability and maintainability.
  • Write and maintain unit tests (TDD) to ensure code reliability and reduce bugs.
  • Debug and resolve complex issues including performance, concurrency, and logic flaws.
  • Work with AWS services (S3, EC2, Lambda, Redshift, CloudFormation) to architect and deliver solutions.
  • Manage version control and artifact repositories with Git and JFrog Artifactory.


Job Skills & Requirements:


  • 7+ years of strong hands-on experience with PySpark, Python, Boto3, and related Python frameworks/libraries.
  • Expertise in Spark SQL & PySpark optimization.
  • Solid understanding of AWS architecture and cloud-native solutions.
  • Proficiency with Git for version control and JFrog Artifactory for artifact management.
  • Strong experience in refactoring, debugging, and optimizing legacy and new code.
  • Knowledge of TDD/Unit testing to ensure robust solutions.
