Job Description
Cloud & AI Backend Engineer (Gen AI / LLMOps)
Own Cloud | Build and Scale AI Systems
Location: Onsite / Hybrid (India)
About Us
BidWiser is a next-gen, AI-powered SaaS platform revolutionizing how engineering companies handle tenders and proposals. Our team is passionate about blending cloud-native architecture, Generative AI, and LLMOps into scalable, production-ready tools. We operate in a large, under-digitized, multi-billion-dollar market with strong early traction.
Role Overview
We are hiring a Cloud & AI Backend Engineer (Gen AI / LLMOps) to own BidWiser’s cloud infrastructure and AI backend end-to-end. This is a hands-on, ownership-driven role, not limited to DevOps or ML research. You will manage AWS and Azure deployments, optimize cost and performance, build and scale LLM-powered systems, and collaborate closely with founders and product teams as we grow.
What You’ll Own
End-to-end ownership of AWS and Azure infrastructure
AWS: ECS, EC2, Lambda, S3, CloudWatch, IAM, VPC
Azure equivalents where applicable
Manage and improve CI/CD pipelines (GitHub Actions, infra automation)
Monitor, optimize, and control cloud costs proactively
Own production deployments, uptime, reliability, and incident resolution
Build, deploy, and scale AI backend services used by real customers
Develop and maintain LLM-powered features across the product
Design and optimize RAG pipelines, embeddings, vector search, and inference layers
Fine-tune models (LoRA, adapters, or prompt tuning where applicable)
Collaborate closely with frontend, product, and founders
Support rapid experimentation and production rollout of AI features
What We’re Looking For
2–3 years of relevant work experience with cloud and AI systems
Strong hands-on experience with AWS cloud services
Working knowledge of Azure or strong willingness to own it
Experience managing production SaaS infrastructure
Proven experience deploying LLMs or ML models in production
Strong backend engineering skills (Python preferred)
Experience with FastAPI, Flask, or similar frameworks
Understanding of LLMOps / MLOps / AI system design
Experience with vector databases and AI pipelines
Ability to debug production issues and own outcomes end-to-end
Startup Mindset & Ownership: Thrives in a fast-paced startup environment, is comfortable wearing multiple hats, takes full ownership of outcomes, is willing to stretch beyond fixed hours when needed, enjoys hands-on problem solving, and is genuinely interested in growing with the company long-term while building core systems from the ground up.
What We Offer
Opportunity to own the entire cloud and AI backend stack
Work on real-world Gen AI systems used by enterprise customers
Direct collaboration with founders and leadership
Steep learning curve across cloud, AI, and distributed systems
ESOPs for long-term contributors
Learn from the best in the industry
Apply on LinkedIn and submit your resume here: