Job Description

Description & Summary

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations and keep pace with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation, and integrated reporting dashboards, we deliver agile, highly interactive reporting and analytics that help our clients run their business more effectively, understand which business questions can be answered, and unlock the answers.

Responsibilities:
● Design, develop, and optimize data pipelines and ETL processes using PySpark or
Scala to extract, transform, and load large volumes of structured and unstructured data
from diverse sources.
● Implement data ingestion, processing, and storage solutions on the Azure cloud platform,
leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure
Synapse Analytics.
● Develop and maintain data models, schemas, and metadata to support efficient data
access, query performance, and analytics requirements.
● Monitor pipeline performance, troubleshoot issues, and optimize data processing
workflows for scalability, reliability, and cost-effectiveness.
● Implement data security and compliance measures to protect sensitive information and
ensure regulatory compliance.
Requirements:
● Proven experience as a Data Engineer, with expertise in building and optimizing data
pipelines using PySpark, Scala, and Apache Spark.
● Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure
services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics,
and Azure SQL Database.
● Strong programming skills in Python and Scala, with experience in software
development, version control, and CI/CD practices.
● Familiarity with data warehousing concepts, dimensional modeling, and relational
databases (e.g., SQL Server, PostgreSQL, MySQL).
● Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a
plus.

Mandatory skill sets: PySpark, Azure
Years of experience required: 4-8
Qualifications: B.E. / B.Tech / MBA

Required Skills

Microsoft Azure, PySpark
