Job Description

Location:

Chennai, TN

About The Role:

The Global Data Engineering team has a wide range of responsibilities and plays a critical role in shaping how Condé Nast enables its business using data. The team builds the data pipelines, data products, and tools that enable our Data Scientists, Analysts across business units, Business Intelligence Engineers, and Executives to solve challenging use cases in our industry.

We are seeking a Data Engineer to build and maintain data pipelines across business areas such as consumer revenue, video, clickstream, commerce, social, and ad revenue within Condé Nast. If you are looking for a challenging environment and want to work with a world-class team of data engineers at a well-balanced, seasoned company, come join us.

RESPONSIBILITIES

Responsibilities include, but are not limited to:

  • Work with team members to fulfill the team's technical needs, and own the development and unit testing of data products, analytics, and data engineering deliverables

  • Build efficient code to transform raw data into datasets for analysis, reporting and data models

  • Collaborate with other data engineers to implement a shared technical vision

  • Participate in the entire software development lifecycle, from concept to release

  • Coordinate with the Product Owner (PO) team, create JIRA tickets where necessary, and keep work logs up to date

MINIMUM QUALIFICATIONS

  • Applicants should have a degree (B.S. or higher) in Computer Science or a related discipline, or equivalent professional experience

  • 3+ years of software development experience designing scalable, automated software systems; strong theoretical knowledge is mandatory

  • Proficiency in Python/PySpark coding; knowledge of data structures and algorithms in Python is preferred

  • Proficiency in SQL

  • Experience with data processing frameworks such as Spark, Flink, or Beam

  • Experience with cloud-based infrastructure such as AWS or GCP

  • Exposure to orchestration platforms such as Airflow or Kubeflow

  • Proven attention to detail, critical thinking, and the ability to work independently within a cross-functional team

  • Basic understanding of version control tools such as Git/GitHub, SVN, or CVS

  • Understanding of Agile frameworks and delivery

  • Experience with any one ETL tool, such as Informatica, Talend, or Pentaho DI, would be a plus

  • Conceptual knowledge of cloud-based distributed/hybrid data warehousing solutions and data lakes would be a plus
