Job Description

You’ll work on the C++ layer that powers local AI, porting and enhancing inference engines such as llama.cpp and ONNX to run efficiently on edge devices. Your focus is the runtime: making models load faster, run leaner, and perform well across different hardware. You’ll ensure that the inference layer is stable, optimized, and ready for integration with the rest of the stack.

This role is for engineers who want to work close to the metal, enabling private and fast on-device AI without relying on cloud infrastructure.

Responsibilities

  • Deploy machine learning models to edge devices using frameworks such as llama.cpp, ggml, and ONNX
  • Collaborate closely with researchers on coding, training, and transitioning models from research to production environments
  • Integrate AI features into existing products, enriching them with the latest advancements in machine learning

Qualifications
  • Exc...

Apply for this Position

Ready to join Tether Operations Limited? Submit your application below.
