Job Description
Location: Downtown Toronto
Hybrid: 4 days in office
Ready to build what powers the next generation of AI?
We’re looking for a Staff LLMOps Engineer to lead the design, deployment, and optimization of large language model (LLM) infrastructure on the cloud.
You’ll be the driving force behind taking trained models from lab to production—scaling efficiently across multi-GPU clusters and pushing the boundaries of inference performance for enterprise-grade AI applications.
If you thrive at the intersection of AI, cloud engineering, and systems optimization, this is your chance to shape the future of large-scale model serving in a high-impact environment.
What You’ll Do
Architect and operationalize LLM deployment pipelines on AWS and Kubernetes/EKS.
Build and scale multi-GPU inference infrastructure for low latency, high availability, and cost efficiency.
Ready to join TEEMA Solutions Group? Submit your application.