Job Description

TCS is inviting applications!
Role: AWS Data Engineer
EXP: 8 - 12 YEARS
LOCATION: Bangalore/Hyderabad
We are seeking a Data Engineer with E3 expertise in Spark, Java, and AWS who will be responsible for designing, building, and maintaining scalable and robust data pipelines and infrastructure within the AWS cloud environment, leveraging Spark for efficient data processing and Java for development within the data ecosystem.
Responsibilities:
- Design, develop, and maintain scalable and robust data pipelines using Apache Spark and Java on AWS.
- Implement ETL/ELT processes to ingest, transform, and load large datasets from various sources into data warehouses on AWS (e.g., S3, Redshift).
- Develop and optimize Spark applications using Java to process and analyze large volumes of data.
- Design and implement data models for efficient storage and retrieval in AWS services such as Amazon Redshift, DynamoDB, or Aurora.
- Utilize various AWS services (e.g., EMR, Glue, Lambda, S3, Redshift, Kinesis) to build and manage data solutions.
- Ensure data security, compliance, and cost-effectiveness of data solutions within the AWS ecosystem.
- Monitor and troubleshoot data pipelines and AWS resources to ensure optimal performance and reliability.
Technical Expertise Expectations:
1. Strong proficiency in Apache Spark for big data processing, including Spark SQL, DataFrames, and RDDs.
2. Solid experience in Java programming for developing data-intensive applications and Spark jobs.
3. Extensive experience with AWS cloud services for data engineering (e.g., S3, EMR, Glue, Redshift, Lambda, Kinesis).
4. Proficiency in SQL for data manipulation and querying.
5. Experience with data warehousing concepts and technologies.
6. Experience with or knowledge of Apache Kafka for real-time data streaming.

Apply for this Position

Ready to join? Click the button below to submit your application.

Submit Application