Job Description

Greetings from TCS!

Job Title: IBM Cloud Pak for Data Engineer (Azure / OpenShift / Kubernetes)

Required Skillset: IBM Cloud Pak for Data, Azure, Kubernetes, OpenShift

Location: PAN INDIA

Experience Range: 5+ years


Job Description:


MUST HAVE:


  • Hands-on experience with IBM Cloud Pak for Data (CP4D) across versions, including installation, configuration, component upgrades, and management of clusters, services, and user access controls in both on-premises and cloud environments
  • Solid understanding of container platforms, specifically Kubernetes and Red Hat OpenShift, as CP4D runs natively on these platforms
  • Expertise in designing and implementing scalable ETL/ELT pipelines using tools available within CP4D and open-source frameworks
  • Understanding of data security principles, PII protection, and data lineage, along with experience using tools like IBM OpenPages for governance and compliance
  • Ability to troubleshoot complex data issues, system bottlenecks, and performance problems, applying strong analytical and critical-thinking skills
  • Ability to understand client needs and deliver solutions that provide tangible business value and strategic insight


Responsibilities / Expectations from the Role:


  • Installing, configuring, and upgrading CP4D components (such as DataStage, Watson Knowledge Catalog, and Watson Machine Learning) in cloud (AWS, Azure, IBM Cloud) or on-premises environments, often leveraging Red Hat OpenShift and Kubernetes for container orchestration
  • Designing and building robust Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) pipelines to ingest, transform, and manage data from disparate sources into a unified platform
  • Utilizing CP4D tools like Data Virtualization to break down data silos and integrate data from across the enterprise without physical movement, ensuring a single source of truth for all users
  • Supporting data scientists and analysts by preparing data for model building, deploying and managing machine learning models, and enabling visualization and reporting
  • Implementing and enforcing data governance policies, lineage tracking, and metadata management using tools like Watson Knowledge Catalog to ensure compliance with data privacy regulations and security standards
  • Monitoring the performance, scalability, and reliability of the CP4D services and underlying infrastructure, identifying bottlenecks, and troubleshooting issues related to the platform or data pipelines
  • Developing software build and automation scripts using languages and tools such as Python, Bash, Ansible, or Terraform to streamline deployment and operational tasks in a DevOps/CI/CD environment
  • Working within agile, cross-functional teams to understand requirements, propose solutions, and create comprehensive technical documentation and standard operating procedures (SOPs)


Good to have:

  • Strong hands-on experience with IBM Cloud Pak for Data and its integrated services
  • Solid understanding of Kubernetes and Red Hat OpenShift environments
  • Experience with cloud platforms (AWS, Azure, IBM Cloud) and related services
  • Familiarity with big data technologies such as Apache Spark or Kafka
  • Strong problem-solving, analytical, and communication skills





Thanks & Regards,

Ria Aarthi A.
