Job Description
Greetings from TCS!
We are looking for a Data Engineer (Python, Databricks, SQL).
Experience: 6-8 years
Location: Bengaluru
Desired Competencies:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal ingestion, transformation, and publishing of data from a wide variety of data sources using Python/Spark and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Experience Requirements:
- 6+ years of IT experience and 4+ years of experience building data applications.
- Advanced working knowledge of SQL and Python/PySpark, experience working with relational databases and authoring queries, and working familiarity with a variety of databases.
- Experience working with Databricks.
- Experience building and optimizing cloud ‘big data’ pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
- A successful history of understanding, processing and extracting value from large, disconnected datasets.
- Understanding of various data set types: structured, semi-structured, and data at rest/in motion.
- Experience in data modeling.
- Experience supporting and working with cross-functional teams in a dynamic environment, plus knowledge of one or more of the following:
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Apache NiFi, AWS Step Functions, Oozie, Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: AWS DMS, Kinesis, Spark Streaming, etc.
- Experience with object-oriented or functional scripting languages: Python, Java, C++, Scala, etc.
Apply for this Position
Ready to join? Submit your application.