Job Description: Junior Engineer, Hands-On (Agentic AI) / AI Engineer
Role Details
- Experience: 2–4 years
- Primary Tech/Domain: Agentic AI / Gen AI / DevOps / Conversational AI / MLOps
- 100% hands-on role
Overview & Expectations
Role Summary:
Lead and deliver high-impact initiatives aligned with the AI & Data charter. Own execution end to end, delivering measurable business value with technical depth and sound governance.
Key Outcomes (3–6 months):
• Ship production-grade solutions with clear ROI, reliability (SLOs), and security.
• Establish engineering standards, pipelines, and observability for repeatable delivery.
• Build Gen-AI applications.
• Mentor talent; uplift team capability through reviews, playbooks, and hands-on guidance.
Responsibilities:
• Translate business problems into well-posed technical specifications and architectures.
• Lead design reviews, prototype quickly, and harden solutions for scale.
• Build automated Gen-AI applications and establish model/data governance across environments.
• Define & track KPIs: accuracy/latency/cost, adoption, and compliance readiness.
• Partner with Product, Security, Compliance, and Ops to land safe-by-default systems.
Technical Skills:
• Tracks: Agentic AI / Gen AI / DevOps / Conversational AI / MLOps
• Python + Cloud: FastAPI, asyncio; AWS/Azure/GCP basics
• Agents & RAG: LangChain/CrewAI basics, embeddings, vector DBs
• DevOps: Docker, CI, unit/integration tests, logging
• Conversational AI: intents, NLU, dialog management, evaluation
• MLOps foundations: model packaging, simple pipelines, monitoring
• Design multi-agent architectures using orchestration frameworks (e.g., LangChain / CrewAI / LangGraph) with clear roles, hand-offs, and acceptance criteria.
• Build event-driven platforms leveraging cloud messaging (EventBridge / Event Grid / Pub/Sub), durable state, retries, and idempotency for reliable agent workflows.
• Integrate LLMs as reasoning engines (Azure OpenAI / AWS Bedrock / Vertex AI) with tool/function calling, structured outputs (JSON), and guardrails (see the first sketch after this list).
• Develop robust tool adapters for agents (search, DB/SQL, vector stores, HTTP APIs, code execution), including error handling, circuit breakers, and fallbacks (see the second sketch after this list).
• Implement observability at scale: tracing of agent steps and LLM calls, metrics (latency, cost per task), logs, and incident playbooks; dashboards for SLOs.
• Harden security & compliance: IAM/RBAC, secrets management (Key Vault/KMS/Secret Manager), PII redaction, audit trails, and policy enforcement.
• Optimize deployment & performance: Containerized microservices on AKS/EKS/GKE, autoscaling, caching/batching, concurrency controls, and cost governance.
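
To make the tool/function-calling and structured-output expectations concrete, below is a minimal Python sketch. It assumes a hypothetical llm_complete() helper standing in for a provider SDK call (Azure OpenAI / AWS Bedrock / Vertex AI) and shows how a JSON tool call can be validated against an allow-list guardrail before the agent acts on it.

import json
from dataclasses import dataclass

ALLOWED_TOOLS = {"search", "sql_query", "http_get"}  # simple allow-list guardrail

@dataclass
class ToolCall:
    tool: str        # tool the model wants to invoke
    arguments: dict  # JSON arguments for that tool

def llm_complete(prompt: str) -> str:
    # Hypothetical LLM call; a real implementation would call the provider SDK.
    # Returns a canned response here so the sketch runs end to end.
    return '{"tool": "search", "arguments": {"query": "refund policy"}}'

def parse_tool_call(raw: str) -> ToolCall:
    # Reject malformed JSON, missing keys, or tools outside the allow-list.
    data = json.loads(raw)
    call = ToolCall(tool=data["tool"], arguments=dict(data["arguments"]))
    if call.tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool '{call.tool}' is not on the allow-list")
    return call

def plan_next_step(task: str) -> ToolCall:
    prompt = (
        'Return ONLY a JSON object of the form {"tool": "<name>", "arguments": {...}} '
        "for the task:\n" + task
    )
    return parse_tool_call(llm_complete(prompt))

print(plan_next_step("Find the current refund policy"))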
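
The second sketch illustrates the tool-adapter resilience patterns mentioned above (retries, a basic circuit breaker, and a fallback) using only the Python standard library; fetch_live and fetch_cached are hypothetical stand-ins for a real upstream call and its cached fallback.

import time

class ToolAdapter:
    def __init__(self, fn, fallback, max_retries=3, failure_threshold=5, cooldown=30.0):
        self.fn = fn
        self.fallback = fallback
        self.max_retries = max_retries
        self.failure_threshold = failure_threshold
        self.cooldown = cooldown
        self._failures = 0
        self._opened_at = None

    def call(self, *args, **kwargs):
        # Breaker open and cooldown not elapsed: skip straight to the fallback.
        if self._opened_at is not None:
            if time.monotonic() - self._opened_at < self.cooldown:
                return self.fallback(*args, **kwargs)
            self._opened_at = None   # half-open: try the real tool again
            self._failures = 0

        for attempt in range(1, self.max_retries + 1):
            try:
                result = self.fn(*args, **kwargs)
                self._failures = 0
                return result
            except Exception:
                self._failures += 1
                if self._failures >= self.failure_threshold:
                    self._opened_at = time.monotonic()   # open the breaker
                    break
                time.sleep(0.1 * 2 ** attempt)           # exponential backoff
        return self.fallback(*args, **kwargs)

# Usage: wrap a flaky live lookup with a cached fallback (both hypothetical).
def fetch_live(query):
    raise TimeoutError("upstream timed out")

def fetch_cached(query):
    return {"query": query, "source": "cache"}

adapter = ToolAdapter(fetch_live, fetch_cached)
print(adapter.call("latest order status"))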
Architecture & Tooling Stack:
• Source control & workflow: Git, branching standards, PR reviews, trunk-based delivery.
• Containers & orchestration: Docker, Kubernetes, Helm; secrets, configs, RBAC.
• Observability: logs, metrics, traces; dashboards with alerting & on-call runbooks (see the tracing sketch after this list).
• Data/Model registries: metadata, lineage, versioning; staged promotions.
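
As an illustration of the observability expectations, the sketch below traces agent steps with per-step latency emitted as structured log lines. A real deployment would export spans and metrics to an observability backend rather than stdout; retrieve_context is a hypothetical agent step.

import functools
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("agent.trace")

def traced_step(step_name):
    # Decorator that records status and latency for each agent step.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span_id = uuid.uuid4().hex[:8]
            start = time.perf_counter()
            status = "ok"
            try:
                return fn(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                logger.info(json.dumps({
                    "span_id": span_id,
                    "step": step_name,
                    "status": status,
                    "latency_ms": round((time.perf_counter() - start) * 1000, 2),
                }))
        return wrapper
    return decorator

@traced_step("retrieve_context")
def retrieve_context(query):
    time.sleep(0.05)   # stand-in for a vector-store lookup
    return ["doc-1", "doc-2"]

retrieve_context("refund policy")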
Performance & Reliability:
• Define SLAs/SLOs for accuracy, tail latency, throughput, and availability.
• Capacity planning with autoscaling; load tests; cache design; graceful degradation (see the caching sketch after this list).
• Cost controls: instance sizing, spot/reserved strategies, storage tiering.
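
The caching sketch below shows one way to combine a simple TTL cache with graceful degradation: if the expensive call fails, the last known (stale) value is served instead of an error. score_with_model is a hypothetical expensive call.

import time

class TTLCache:
    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self._store = {}   # key -> (value, stored_at)

    def get_or_compute(self, key, compute):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]                 # fresh cache hit
        try:
            value = compute()
            self._store[key] = (value, now)
            return value
        except Exception:
            if entry is not None:
                return entry[0]             # degrade gracefully: serve the stale value
            raise

cache = TTLCache(ttl_seconds=30)

def score_with_model():
    return {"risk": 0.12}

print(cache.get_or_compute("customer-42", score_with_model))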
Qualifications:
• Bachelor’s/Master’s in CS/CE/EE/Data Science or equivalent practical experience.
• Strong applied programming in Python; familiarity with modern data/ML ecosystems.
• Proven track record of shipping and operating systems in production.
