Job Description
About The Job
At Alignerr, we partner with the world’s leading AI research teams and labs to build and train cutting-edge AI models.
This role focuses on structured adversarial reasoning rather than exploit development. You will work with realistic attack scenarios to model how threats move through systems, where defenses fail, and how risk propagates across modern environments.
Organization: Alignerr
Position: Offensive Security Analyst (Structured / Non-Exploit)
Type: Contract / Task-Based
Compensation: $40–$60/hour
Location: Remote
Commitment: 10–40 hours/week
What You’ll Do
- Analyze attack paths, kill chains, and adversary strategies across real-world systems
- Classify weaknesses, misconfigurations, and defensive gaps
- Review red-team style scenarios and intrusion narratives
- Help generate, label, and validate adversarial reasoning data used to train AI models