#1
★ Gold
Technical analysis connecting Microsoft's Whisper Leak research — showing attackers can infer LLM query topics from encrypted traffic metadata (packet timing, size, sequence) without breaking cryptography — with a McKinsey incident where an autonomous agent exploited internal endpoints and SQL injection at machine speed. Both demonstrate that AI systems are inherently observable through traffic patterns and agents compress exploitation timelines from days to minutes.
#3
★ Gold
LiteLLM v1.82.8 on PyPI was infected with malware that harvested SSH keys, cloud credentials, and secrets on Python startup, then attempted lateral movement across Kubernetes clusters. The library handles 97 million monthly downloads and is core infrastructure for agent-to-LLM communication across the ecosystem.
#4
★ Gold
Novee debuted at RSAC 2026 with an autonomous red-teaming platform that chains adversarial attack techniques against AI applications. Founded by national-level offensive security leaders, the agent gathers context on targets, builds behavioral models, and simulates multi-step attacks. It discovered a critical Cursor RCE vulnerability. $51.5M raised in 4 months.
#7
★ Silver
OpenAI announced a public Safety Bug Bounty on Bugcrowd offering up to $20K per report for AI-specific vulnerabilities — agentic prompt injection, MCP exploits, proprietary information exposure, and platform integrity bypasses. This is the first major safety-focused (not just security-focused) bounty program for LLM systems.
#11
CL-STA-1087, a sophisticated espionage operation, has targeted Southeast Asian military organizations since 2020 using custom backdoors (AppleChris, MemFun), Mimikatz variants, dead drop resolvers via Pastebin, reflective DLL loading, memory-only execution, and deliberate 6-hour sleep intervals between commands. Operations align with the UTC+8 timezone and rely on Chinese cloud services.