Responsible AI Rules for Cybersecurity Research
A short, strict code of conduct for anyone using AI assistance in security research.
These rules are not optional. They are the floor, not the ceiling.
- Defensive-only use. AI assistance is for finding and fixing vulnerabilities in systems you own or are explicitly authorized to test.
- No exploitation of third-party systems.
- No credential theft, password cracking, or session hijacking.
- No malware authoring, packing, or evasion assistance.
- No unauthorized scanning of networks, services, or accounts.
- Coordinated vulnerability disclosure with reasonable embargo windows.
- Human expert review of every finding before it leaves your team.
- Clear documentation of model use in your security artifacts.
A research community that holds this line earns continued access to powerful tools. A community that does not loses them.