Job Description

Job Summary


We are looking for a talented and forward-thinking GenAI Security professional to join our team. This role focuses on implementing security guardrails, performing adversarial testing, and ensuring secure design, deployment, and runtime protection of GenAI-powered applications.

The ideal candidate will bring expertise in leveraging new technologies to enhance threat detection and data protection and to predict vulnerabilities across AI-powered applications.


Key Responsibilities


Security Engineering:

  • Work alongside development teams to integrate security controls into GenAI agent frameworks, model interfaces and supporting infrastructure.
  • Evaluate GenAI applications through scenario-based testing, abuse case analysis, and adversarial simulations to identify weaknesses unique to AI-driven systems.
  • Build and maintain technical controls for secure execution of AI agents, including monitoring, logging, and misuse prevention mechanisms.


Compliance and Governance:

  • Contribute to the development of governance frameworks (e.g., ISO/IEC 42001:2023) for GenAI usage, ensuring alignment with internal policies and external compliance requirements.
  • Support compliance reviews and internal assessments by maintaining documentation, control evidence and security posture reports for GenAI deployments.
  • Enable responsible AI adoption by conducting knowledge-sharing sessions and creating guidance material for engineering and security teams.


AI Data Protection and Controls:

  • Partner with product, platform and engineering teams to define, design and continuously enforce robust data security controls across GenAI systems, ensuring secure handling of prompts, model responses, training datasets, and derived outputs throughout the application lifecycle.
  • Continuously monitor, analyze, and assess GenAI interactions using telemetry, logging, and usage analytics to detect security gaps, misuse patterns, and emerging threats, and communicate actionable risk insights, remediation strategies, and trends to engineering, product, and leadership stakeholders.


Collaboration and Reporting:

  • Work closely with cross-functional teams to embed AI-powered security practices in development pipelines and system architecture. Provide detailed insight reports on AI/ML-driven security improvements, potential risks, and recommended mitigations to management and stakeholders.
  • Assist in creating and updating security policies, procedures, and standards to ensure they reflect emerging AI/ML technologies and best practices. Conduct training and workshops for other members of the security teams on AI/ML techniques and their use in AI security.


Required Skills & Experience


  • 5 years of experience in GenAI security, AI application security, or AI engineering with a security focus.
  • Hands-on experience developing or securing GenAI agents.
  • Strong understanding of LLMs, prompt engineering, RAG, embeddings, and vector databases.
  • Knowledge of GenAI threats such as prompt injection, data exfiltration, hallucinations, and model abuse.
  • Experience with Python and APIs for AI/ML systems.
  • Familiarity with cloud platforms (AWS, GCP, or Azure) and AI services (e.g., Vertex AI, OpenAI).
  • Understanding of data privacy and compliance requirements (e.g., DPDPA, ISO/IEC 42001, PCI DSS, GDPR).
