Job Description

Job Role: AWS Data Engineer


Job Location: Pune / Hyderabad / Chennai / Mysuru / Bhubaneswar / Mangalore / Trivandrum / Chandigarh / Jaipur / Nagpur / Indore / Gurgaon


Experience: 7+ years


Job Roles & Responsibilities:


  • Design, develop, and maintain data pipelines and assets on AWS.
  • Optimize and refactor legacy PySpark / Spark SQL code for performance and maintainability.
  • Write unit tests and apply TDD practices to ensure robust, reliable code.
  • Debug and resolve complex performance, concurrency, and logic issues.
  • Manage code versioning and repositories (Git, JFrog Artifactory).
  • Leverage AWS services (S3, EC2, Lambda, Redshift, CloudFormation) for scalable data solutions.
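The unit-testing / TDD responsibility above can be illustrated with a minimal sketch. The `dedupe_records` helper and its tests are hypothetical examples of the style expected, not part of any actual codebase:

```python
import unittest

def dedupe_records(records):
    """Remove records with duplicate 'id' values, keeping the first occurrence.

    Hypothetical pipeline helper, for illustration only.
    """
    seen = set()
    out = []
    for rec in records:
        if rec["id"] not in seen:
            seen.add(rec["id"])
            out.append(rec)
    return out

class DedupeRecordsTest(unittest.TestCase):
    # In TDD, tests like these are written first and define the expected behaviour.
    def test_removes_duplicates_keeping_first(self):
        data = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
        self.assertEqual(dedupe_records(data),
                         [{"id": 1, "v": "a"}, {"id": 2, "v": "c"}])

    def test_empty_input(self):
        self.assertEqual(dedupe_records([]), [])

# Run the tests programmatically instead of via unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DedupeRecordsTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In day-to-day work the same pattern applies to PySpark transformations, typically by asserting on small in-memory DataFrames.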


Job Skills & Requirements:


  • 7+ years of hands-on experience with Python, PySpark, Boto3, and related frameworks and libraries.
  • Proven expertise in Spark SQL and PySpark performance optimization.
  • Strong knowledge of AWS architecture (S3, EC2, Lambda, Redshift, CloudFormation).
  • Experience refactoring code into clean, maintainable solutions.
  • Familiarity with Git, JFrog Artifactory, and modern CI/CD practices.
  • Strong debugging and problem-solving skills.
  • Solid understanding of unit testing and TDD methodologies.
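As a taste of the infrastructure-as-code side of the role, a minimal CloudFormation fragment for a versioned S3 bucket might look like the sketch below. The bucket name is a hypothetical placeholder:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example template (illustrative only)
Resources:
  DataLakeBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Hypothetical name; bucket names must be globally unique
      BucketName: example-data-lake-bucket
      VersioningConfiguration:
        Status: Enabled
```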

