Job Description
DRIVE YOUR FUTURE
The Software Data Engineer will join the Data Platform project, with the goal of developing data ingestion processes through Kafka and integrating all manufacturing systems present in the plants. They will also be part of the AMS team, responsible for handling user-reported issues worldwide and addressing improvement requests related to the ingestion process.
They will focus on analyzing and implementing big data solutions, particularly in a cloud environment. The projects address digital transformation, transitioning from traditional and legacy solutions to data management and analysis aligned with a data-driven strategy. The candidate will collaborate with the Milan team to share requirements and to design and model loading chains.
Responsibilities include documenting developments, conducting integration tests and deployments in the AWS environment, and managing the handover of the implemented solution to the support team for governance.
WHY JOIN US
We are offering a rewarding role with scope for career progression along with:
*Meal vouchers (Ticket Restaurant).
*Flexible working hours.
*Hybrid remote working.
*Employee benefits.
MAIN ACTIVITIES
*Develop pipelines for loading data from factory systems into the data platform.
*Tune database and pipeline performance.
*Write PL/SQL code.
*Draft and maintain technical documentation for ETL implementations.
*Maintain and troubleshoot existing ETL workflows.
*Interface with business units to gather requirements.
*Set up alert and monitoring systems using CloudWatch and Python.
WHAT WE ARE LOOKING FOR
*Degree in technical/scientific disciplines.
*In-depth knowledge of SQL (essential), including performance analysis and execution plans, as well as techniques for optimizing database reads and writes (e.g., indexes, partitioning keys).
*In-depth knowledge of S3, Aurora, RDS, and PostgreSQL for configuring and managing databases in data loading processes.
*Programming experience with Python, Spark, Node.js, Java, Spring Framework, Angular, and Scala.
*Experience in cloud environments, preferably AWS (MSK, Kafka).
*Experience in designing, developing, and managing scalable pipelines (ETL).
*Analytical skills for handling large data volumes (Big Data).
*Knowledge of data integration and data streaming topics, with experience in at least two tools/frameworks (e.g., Spark, Apache Beam, Kafka, Databricks) for at least one year.
*Familiarity with popular big data architectures and technologies (e.g., Hadoop, MapReduce, HBase, Oozie, Hive, Flume, MongoDB, Cassandra, Pig).
WHAT PUTS YOU IN POLE POSITION
*Curiosity and an interest in learning new technologies, products, and features.
*Strong communication skills to collaborate with cross-functional teams and stakeholders.
*Detail-oriented with good organizational and time management skills.
APPLY NOW, DRIVE YOUR FUTURE!
#WePirelli