Job Description
**Key Responsibilities**:
- Design, develop, and implement data pipelines and ETL processes using Python with Databricks and/or IICS (Informatica).
- Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet business needs.
- Optimize existing data workflows for performance and scalability.
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution.
- Document data processes, workflows, and technical specifications.
- Stay updated with the latest industry trends and technologies related to data engineering and cloud services.
**Qualifications**:
- Proven experience in Python programming and data manipulation.
- Mid-level knowledge of Databricks and its ecosystem (Spark, Delta Lake, etc.).
- Experience with IICS or similar cloud-based data integration tools.
- Familiarity with SQL and database management systems (e.g., PostgreSQL, MySQL, etc.).
- Understanding of data warehousing concepts.
Ready to join Chubb? Submit your application.