Job Description

Role Overview

LILT is seeking freelance AI Red Team experts to collaborate on projects focused on adversarial testing of AI systems — LLMs, multimodal models, inference services, RAG/embeddings, and product integrations. Your work will involve crafting prompts and scenarios to test model guardrails, exploring creative ways to bypass restrictions, and systematically documenting outcomes. You’ll think like an adversary to uncover weaknesses, while collaborating with engineers and safety researchers to share findings and improve system defenses.

Key Criteria
  • Generative AI Expertise: Deep understanding of generative AI and leading models, including their underlying architectures, training processes, and potential failure modes. This includes knowledge of concepts such as prompt engineering, fine-tuning, and reinforcement learning from human feedback (RLHF)
  • Cybersecurity & Threat Modeling: Experience with cybersecurity principles, including threat modeling, vulnerability assessment...
