Job Description
Experience Level: 3-14 Years
Location: Bangalore/Hyderabad
Mandatory Skills: Azure Databricks, PySpark, SQL
Notice Period: Immediate to 20 Days
Role & Responsibilities
- Experience in data warehouse/ETL projects.
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and the Databricks Delta Lake architecture.
- Hands-on experience with SQL, Python, and Spark (PySpark).
- Must have experience with the AWS/Azure stack.
- Desirable: experience with batch and streaming ETL (e.g., Amazon Kinesis).
- Experience building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming and event-based data.
- Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMSs, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.