Job Description
Key Responsibilities:
- Design, build, and optimize ETL pipelines using Python, PySpark, and Spark
- Develop scalable data solutions leveraging Databricks, AWS Glue, EMR, and S3
- Collaborate with cross-functional engineering and analytics teams to implement best practices in data ingestion, transformation, and storage
- Support data quality, performance tuning, and process automation across the data lifecycle
- Work in Agile environments with CI/CD and version control tools

Required Skills and Experience:
- 3 to 7+ years of experience in data engineering, preferably in cloud-based environments
- Strong proficiency in Python, PySpark, Spark, and SQL
- Hands-on experience with AWS data services (S3, Glue, EMR, Redshift, Lambda, Athena)
- Experience with Databricks or equivalent data lake platforms
- Familiarity with modern DevOps practices (Git, Jenkins, Terraform, A...
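For illustration only, a minimal sketch of the kind of PySpark ETL work this role involves: extract raw data from S3, apply a few transformations, and load partitioned Parquet back to S3 for downstream querying. Bucket names, paths, and column names below are hypothetical placeholders, not part of the posting.

```python
# Minimal PySpark ETL sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV events from an S3 prefix (placeholder bucket)
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/events/")

# Transform: drop incomplete rows, normalize the timestamp, derive a partition column
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream consumption (e.g. Athena or Redshift Spectrum)
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)

spark.stop()
```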