Job Description
Acuity Analytics is hiring multiple Senior Data Engineers for its Data and Technology Services Team in Bengaluru, Gurgaon, and Pune (hybrid).
Core skills: advanced proficiency in Python and SQL (covering both core programming and data manipulation); messaging systems (IBM MQ, Apache Kafka, RabbitMQ, ActiveMQ, or similar); hands-on experience with the Azure cloud-native platform; and strong knowledge of Databricks, including core data engineering concepts. We need people with strong experience using Python to work with data from both files and real-time data streams.
Immediate joiners are highly preferred (notice period of 0-30 days).
Mandatory Requirements:
- Candidates should have a bachelor's or master's degree in a science or engineering discipline (Computer Science, Engineering, Mathematics, Physics, etc.) or a related field.
- 7+ years of experience in software development with a focus on data projects, using strong Python coding skills, PySpark, and associated frameworks.
- Hands-on SQL coding skills with RDBMS or NoSQL databases.
- Hands-on experience with IBM MQ administration in enterprise environments.
- Strong expertise in MQ clustering and distributed messaging, MQ object management, SSL/TLS configuration, and messaging patterns (pub/sub, point-to-point, request/response).
- Proven experience as a Data Engineer on the Azure cloud, implementing solutions using Azure cloud services: Azure Data Factory, Azure Data Lake Storage Gen2, Azure databases, Azure Data Fabric, API gateway management, and Azure Functions.
- Experience developing APIs in Python using FastAPI or similar frameworks. Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes. Good understanding of ETL/ELT processes.
- Ability to assist stakeholders with data-related technical issues and support their data infrastructure needs, and to develop and maintain documentation for data pipeline architecture, development processes, and data governance.
- In-depth knowledge of data warehousing concepts, architecture, and implementation. Exceptionally strong organizational and analytical skills with keen attention to detail.
- Strong communication and interpersonal skills, with the ability to effectively engage with both technical and non-technical stakeholders.
Key Responsibilities:
- Interpret business requirements and work with internal resources as well as application vendors.
- Design, develop, and maintain Databricks solutions and relevant data quality rules. Troubleshoot and resolve data-related issues.
- Configure and create data models and data quality rules to meet customer needs. Handle multiple database platforms such as Microsoft SQL Server and Oracle.
- Review and analyze data from multiple internal and external sources. Analyze existing complex Python/PySpark code and identify areas for optimization. Write new, optimized SQL queries or Python scripts to improve performance and reduce run time.
- Write clean, efficient, and well-documented code that adheres to best practices and internal IT coding standards. Maintain and operate existing custom code processes.
- Apply query-writing skills to understand and implement changes to SQL functions and stored procedures.
- Deliver results for real-world business problems under demanding timelines. Work independently and multitask effectively.
- Configure system settings and options, and execute unit/integration testing. Develop end-user release notes and training materials, and deliver training to a broad user base.
- Identify and communicate areas for improvement. Take responsibility for quality checks and adhere to the agreed Service Level Agreement (SLA) and Turnaround Time (TAT).
Interested candidates, please share your updated CVs at