Job Description

Type: Hourly contract

Compensation: $50–$111 per hour

Location: Remote

Role Responsibilities

  • Conduct adversarial testing of AI models, including jailbreaks, prompt injections, misuse cases, and exploit discovery
  • Generate high-quality human evaluation data by annotating failures, classifying vulnerabilities, and flagging systemic risks
  • Apply structured red-teaming frameworks, taxonomies, benchmarks, and playbooks to ensure consistent testing
  • Produce clear, reproducible documentation such as reports, datasets, and adversarial test cases
  • Support multiple customer projects, ranging from LLM safety testing to socio-technical abuse and misuse analysis
  • Communicate identified risks and vulnerabilities clearly to technical and non-technical stakeholders

Requirements

  • Strong experience in AI red-teaming, adversarial testing, cybersecurity, or soci...

Apply for this Position

Ready to join Crossing Hurdles? Click the button below to submit your application.
