Curriculum Map
Master AI security from fundamentals to certification. Track your progress across OWASP LLM Top 10, MITRE ATLAS, and NIST AI RMF.
1) OWASP LLM Top 10 Coverage Map
View All LabsLLM01: Prompt Injection
177 labsManipulating prompts to override intended model behavior.
B/I/A/E: 16/45/52/64
LLM02: Insecure Output Handling
23 labsUnsafe model outputs causing downstream system risk.
B/I/A/E: 4/9/7/3
LLM03: Training Data Poisoning
29 labsCompromised training data shaping malicious model behavior.
B/I/A/E: 6/9/7/7
LLM04: Model Denial of Service
6 labsResource exhaustion attacks that degrade availability.
B/I/A/E: 1/0/3/2
LLM05: Supply Chain Vulnerabilities
27 labsWeaknesses in models, datasets, or dependencies.
B/I/A/E: 2/5/16/4
LLM06: Sensitive Information Disclosure
31 labsLeakage of secrets, PII, or confidential context.
B/I/A/E: 4/6/11/10
LLM07: Insecure Plugin Design
36 labsUnsafe tool/plugin integrations enabling abuse.
B/I/A/E: 2/6/26/2
LLM08: Excessive Agency
40 labsOver-privileged autonomous actions without safeguards.
B/I/A/E: 7/5/22/6
LLM09: Overreliance
12 labsUnsafe trust in model outputs without verification.
B/I/A/E: 2/4/4/2
LLM10: Model Theft
18 labsStealing model weights, behavior, or proprietary capability.
B/I/A/E: 3/6/5/4
2) Learning Pathway
Track 0: Prerequisites
6 lessonsLevel 0: AI Security Basics
6 lessonsBeginner Labs (Free)
47 labsIntermediate Labs (Pro)
3) Framework Alignment
Last updated: March 2026
Last updated: March 2026
4) MITRE ATLAS Coverage
Labs mapped to the MITRE ATLAS framework โ adversarial tactics and techniques for AI/ML systems.
Reconnaissance
4 labsResource Development
4 labsInitial Access
134 labsML Attack Staging
83 labsExecution
10 labsPersistence
5 labs5) NIST AI RMF
AI Risk Management Framework โ organized by core functions
6) Real-World Case Studies
Notable AI security incidents mapped to ATLAS techniques. Practice the same attack vectors in our labs.
Samsung ChatGPT Data Leak
Engineers pasted proprietary source code into ChatGPT, leaking confidential semiconductor data to an external AI service.
Practice This Attack โMicrosoft Tay Chatbot
Users manipulated Tay via coordinated adversarial inputs, causing the chatbot to output offensive content within 24 hours of launch.
Practice This Attack โBing Chat Sydney Jailbreak
Security researchers extracted the hidden system prompt and induced persona switches in Bing Chat, revealing internal instructions.
Practice This Attack โ