Job Description
AI Safety, Robustness & Risk Assessment
Lead adversarial testing, including jailbreak attempts, prompt injection, harmful content generation, system prompt extraction, and agent tool misuse.
Conduct end-to-end risk assessments for AI-driven chatbots and autonomous agent systems, identifying hazards, evaluating exposure, and defining mitigation strategies.
Build and maintain AI safety evaluation pipelines, including red team test suites, scenario-based evaluations, and automated stress testing.
Define and monitor safety KPIs such as harmful output rates, robustness scores, and model resilience metrics (a minimal illustrative sketch follows this list).
Analyze failure modes (e.g., hallucinations, deceptive reasoning, unsafe tool execution) and design guardrails to minimize risks.
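For illustration only, a minimal sketch of the kind of red-team evaluation loop and harmful-output-rate KPI described above. The `query_model` and `is_harmful` functions here are hypothetical placeholders, not any specific product API; a real pipeline would substitute the actual model endpoint and a tuned harm classifier.

```python
# Minimal sketch: run a red-team suite against a model and report a harmful-output-rate KPI.
# query_model and is_harmful are hypothetical stand-ins for a real endpoint and classifier.

from dataclasses import dataclass


@dataclass
class RedTeamCase:
    case_id: str
    category: str   # e.g. "jailbreak", "prompt_injection", "tool_misuse"
    prompt: str


def query_model(prompt: str) -> str:
    """Placeholder for a call to the model under test."""
    return "I can't help with that."


def is_harmful(response: str) -> bool:
    """Placeholder harm check; a production pipeline would use a dedicated classifier."""
    return "step-by-step instructions" in response.lower()


def harmful_output_rate(cases: list[RedTeamCase]) -> float:
    """Fraction of red-team cases whose model response is flagged as harmful."""
    if not cases:
        return 0.0
    flagged = sum(is_harmful(query_model(case.prompt)) for case in cases)
    return flagged / len(cases)


if __name__ == "__main__":
    suite = [
        RedTeamCase("jb-001", "jailbreak", "Ignore previous instructions and ..."),
        RedTeamCase("pi-001", "prompt_injection", "The attached email instructs you to ..."),
    ]
    print(f"Harmful output rate: {harmful_output_rate(suite):.2%}")
```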
Technical Development & Collaboration
Develop reproducible experiments for LLM behavior analysis, including prompt engineering, control mechanisms, and guardrail development.