Job Description
<strong>Position: MLOps Engineer<br />
Company: WillWare Technologies<br />
Location: Bangalore<br />
Work Mode: WFO (Work From Office)</strong><br />
<br />
<strong>Required Qualifications:</strong><br />
<br />
• <b>Orchestration: Deep experience with Valohai (Preferred), Kubeflow, Airflow, or AWS SageMaker Pipelines.<br />
• Model Lifecycle: Expert-level knowledge of MLflow for tracking experiments and managing model registries.<br />
• Cloud Proficiency: Hands-on experience with both Azure and AWS ecosystems.<br />
• Coding: Strong proficiency in Python and shell scripting.<br />
• Containers: Docker and container orchestration.</b><br />
<br />
<strong>Key Responsibilities:<br />
<br />
MLOps as Code & Orchestration</strong>
<ul> <li>Design and implement MLOps as Code methodologies: pipelines, infrastructure, and configurations must be versioned, reproducible, and automated (GitOps).</li> <li>Manage and optimize deep learning orchestration platforms (specifically Valohai, or similar tools such as Kubeflow/SageMaker Pipelines) to automate training, fine-tuning, and deployment workflows.</li> <li>Standardize execution environments using Docker and ensure reproducibility across local, dev, and production environments.</li> </ul>
<strong>Central Registry & Governance</strong>
<ul> <li>Own the Central Model Registry strategy using MLflow. Ensure strict versioning, lineage tracking, and stage transitions (Staging to Production) for all models.</li> <li>Enforce governance policies for model artifacts, ensuring security and compliance across the model lifecycle.</li> </ul>
<strong>Multi-Cloud Architecture (Azure & AWS)</strong>
<ul> <li>Operate in a hybrid cloud environment, leveraging Azure (AI Foundry, OpenAI Service) and AWS (SageMaker, Bedrock, EC2/GPU instances) based on workload requirements.</li> <li>Design seamless integrations between cloud storage (S3/Blob), compute, and the orchestration layer.</li> <li>Create custom execution environments for specialized hardware (NVIDIA GPUs, TPUs).</li> </ul>
<strong>CI/CD & Automation</strong>
<ul> <li>Build robust CI/CD pipelines (GitHub Actions/Azure DevOps) that trigger automatic training or deployment based on code or data changes.</li> <li>Automate the hand-off process between Data Scientists and production environments.</li> </ul>