
GenAI Security Researcher · AI Red Teamer · Offensive Security Writer

I'm Kai Aizen — independent security researcher focused on adversarial AI, LLM red teaming, and the intersection of social engineering and prompt injection. I build frameworks and tooling for structured AI safety testing.
Creator of AATMF · Author of Adversarial Minds · 6+ CVEs in NVD · Hakin9 Contributing Author
| Project | Description |
| --- | --- |
| AATMF v3.1 | Adversarial AI Threat Modeling Framework — 20 tactics, ~240 techniques. Maps to OWASP LLM Top-10, NIST AI RMF, and MITRE ATLAS. |
| AATMF Red Teaming Toolkit | Python CLI for systematic LLM safety testing — three-layer eval pipeline, defense fingerprinting, decay tracking, and attack chain planning. |
| LLM Red Teamer's Playbook | Diagnostic methodology for bypassing LLM defense layers — input filters → alignment → identity → output → agentic trust. |
| Project | Description |
| --- | --- |
| ChatGPT-DNS-Exfill | DNS exfiltration via ChatGPT Canvas — rendered content triggers DNS lookups without HTTP requests. |
| chatgpt-rce-dns | DNS exfiltration and Python Pickle RCE attack chains in AI code execution sandboxes. |
| Tool | Description |
| --- | --- |
| Burp MCP Toolkit | MCP security analysis for Burp Suite — prompt injection and tool poisoning testing via the Model Context Protocol. |
| SnailHunter | AI-powered bug bounty automation — LLM analysis combined with traditional security scanning. |
| KubeRoast | Red-team Kubernetes misconfiguration and attack-path scanner. |
| Xposure | Autonomous credential intelligence platform for attack surface recon. |
| SnailSploit Recon | Chrome MV3 extension for passive recon and bug bounty automation. |
| ZenFlood | Low-bandwidth stress testing — a modernized Slowloris. |