Job Description

Location - Bangalore
Experience Range - 4 to 6 years

Requirements and Skills

·       You should have a bachelor's or master's degree in Computer Science, Information Technology, or another quantitative field.

·       You should have at least 5 years of experience working as a data engineer supporting large data transformation initiatives related to machine learning, with experience building and optimizing pipelines and data sets.

·       Strong analytic skills related to working with unstructured datasets.

Must-have Programming Skills:

·       5+ years of strong knowledge of the Informatica PowerCenter ETL tool.

·       Good hands-on experience in SQL, including writing analytical queries and window functions.

·       Good hands-on experience in creating external tables, partitioning, and Parquet files.

·       3-5 years of solid experience in Big Data technologies is a must.

·       Data engineering experience using AWS core services (Redshift).

·       Knowledge of Python and PySpark is an added advantage.

Requirements/Skill sets:

·       Knowledge of database concepts (indexing and partitioning) is a must.

·       Knowledge of data warehousing: normalization, denormalization, star/snowflake schemas.

·       Good hands-on experience with table creation, DDL, DML, and TCL.

·       Hands-on experience in Informatica PowerCenter as an ETL tool.

·       Experience with AWS cloud services (Redshift, S3, Athena) and familiarity with various AWS log formats.

·       Experience with object-oriented/functional scripting languages: Python, PySpark.

·       Experience with the DBeaver tool.
