Welcome to AI Governance Insider
DeepSeek-V3.2 just outperformed GPT-5 on reasoning benchmarks.
It's also banned in 12 US states, 6 countries, and multiple federal agencies.
The irony isn't lost on anyone: the best-performing AI might be the one you can't use.
But here's what most coverage misses - this isn't just a DeepSeek problem. Enterprise AI has a trust crisis on every front:
Where your AI runs -> DeepSeek's data goes to China
What it leaks -> Copilot had a zero-click exfiltration vulnerability
Who's accountable -> Agentic AI is creating identities faster than you can audit them
This week, we're breaking down all three. Because trusting "enterprise-grade" isn't a security strategy.

The Paradox Nobody Wants to Talk About
DeepSeek went from market darling to banned in 30 days.
The Timeline:
January 2025: DeepSeek launches. Markets lose their minds. Nvidia drops 17%.
Jan 31: Texas becomes the first state to ban it.
Feb 6: Bipartisan federal bill introduced (H.R. 1121).
Feb 10-11: New York and Virginia follow.
March: Nine more states pile on.
January 2026: DeepSeek-V3.2 beats GPT-5 on reasoning benchmarks. Still banned.
Why the Ban:
It's not about the model. It's about where your data goes.
DeepSeek's own privacy policy states that user data is stored on servers in China. Under China's National Intelligence Law, companies must cooperate with state intelligence services. DeepSeek's code has also been linked to China Mobile - a carrier already banned by the FCC.
As Rep. Gottheimer put it: "This is a five-alarm national security fire."

The Vulnerability That Needed Zero Clicks
DeepSeek is banned for where data might go. But what about the AI you're already using?
EchoLeak (CVE-2025-32711):
First zero-click AI vulnerability ever discovered
CVSS 9.3 - Critical severity
Affects Microsoft 365 Copilot
How it worked:
An attacker crafts an Excel file with hidden white-text instructions spread across multiple sheets.
File lands in your M365 tenant. You don't need to open it.
Copilot reads the hidden prompts and executes them.
It exfiltrates corporate emails by encoding them into a Mermaid diagram.
Data leaves through what looks like a harmless "login button."
No clicks. No user interaction. Just Copilot doing what it was designed to do - follow instructions.
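
To get a feel for what a defensive control looks like here, below is a minimal sketch of a pre-ingestion scan that flags white-on-white text in a spreadsheet before an assistant is allowed to index it. Everything in it - the use of openpyxl, the ARGB values treated as "white", the function itself - is an illustrative assumption, not Microsoft's actual fix.

```python
# Illustrative pre-ingestion check (not Microsoft's patch): flag cells
# whose text is styled white - the hiding spot described above - before
# an AI assistant indexes the file.
from openpyxl import load_workbook

WHITE_ARGB = {"FFFFFFFF", "00FFFFFF"}  # common ARGB encodings of white

def find_hidden_text(path: str) -> list[tuple[str, str, str]]:
    """Return (sheet, cell, text) for non-empty cells with a white font."""
    findings = []
    wb = load_workbook(path)  # default mode preserves cell styles
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                rgb = getattr(cell.font.color, "rgb", None)
                if cell.value and isinstance(rgb, str) and rgb in WHITE_ARGB:
                    findings.append((ws.title, cell.coordinate, str(cell.value)))
    return findings

# Usage: quarantine the file if anything comes back.
# for sheet, coord, text in find_hidden_text("incoming.xlsx"):
#     print(f"hidden text in {sheet}!{coord}: {text[:60]}")
```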
The Uncomfortable Truth:
DeepSeek is banned for theoretical data exposure. Copilot had an actual zero-click exfiltration vulnerability sitting in production.
Microsoft patched it in May 2025. But the attack surface hasn't changed: your AI assistant has access to your emails, documents, and calendar. It follows instructions embedded in files. And it doesn't ask permission.
The question isn't whether your AI vendor is "enterprise-grade." It's whether you know what your AI is doing with the access you've given it.

The Identity Problem Nobody Saw Coming
Here's a scenario that's already happening:
An incident investigation asks: "Who owns this service account?"
The answer: "An AI agent created it three days ago. No ticket. No approval queue. No human in the loop."
The Numbers:
43% rise in unexpected AI-driven security incidents (PwC Digital Trust Insights 2025)
40% of enterprise apps will integrate AI agents by end of 2026 (Gartner)
We're not talking about a future problem. Agents are already spinning up identities, accessing systems, and taking actions - faster than security teams can track.
Why CISOs Own This:
Traditional security assumes human actors. Audit trails assume you can distinguish between a user and a tool.
But when an agent acts, who's accountable?
Was it Bob?
Or Bob's AI assistant?
Or Bob's AI acting in a way Bob never intended?
When regulators ask who accessed what, "we're not sure" is a catastrophic answer.
The Accountability Shift:
This isn't an AI governance problem. It's an identity problem.
And CISOs will be held accountable - not for whether AI was deployed safely, but for whether they can prove what it did.
If you can't answer "was that a human or an agent?" with certainty, you have a gap that auditors will find before you do.
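
One concrete way to start closing that gap is to make the actor type a first-class field in every audit record, rather than something you reconstruct after an incident. Here's a minimal sketch in Python; the field names and the human/agent split are illustrative assumptions, not a standard schema.

```python
# Illustrative audit record (not a standard): the actor type and the
# human principal an agent acts for are captured at write time, so
# "was that a human or an agent?" is answerable from the log itself.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ActorType(Enum):
    HUMAN = "human"
    AGENT = "agent"

@dataclass
class AuditRecord:
    actor_id: str              # e.g. "bob@corp.com" or "agent:provisioner-01"
    actor_type: ActorType      # the field most audit trails are missing
    on_behalf_of: str | None   # the human principal an agent acts for
    action: str                # e.g. "create_service_account"
    resource: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# The "who owns this service account?" question now has an answer:
record = AuditRecord(
    actor_id="agent:provisioner-01",
    actor_type=ActorType.AGENT,
    on_behalf_of="bob@corp.com",
    action="create_service_account",
    resource="svc-data-sync",
)
```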
Stat of the Week: 12 States, 6 Countries, 1 AI Model
DeepSeek is now banned in 12 US states, 6 countries, and multiple federal agencies - making it the fastest-banned AI in history.
But the stat isn't the point. The pattern is.
| Threat | Vector | Lesson |
|---|---|---|
| DeepSeek | Data goes to China | Jurisdiction matters |
| Copilot | Zero-click exfiltration | Access = attack surface |
| Agentic AI | Unauditable identities | Visibility is non-negotiable |
The Common Thread:
Three different threats. One underlying problem: you don't control what you can't see.
Policies don't stop data from crossing borders.
Vendor trust doesn't prevent prompt injection.
Permission models don't track agent-created identities.
Enterprise AI doesn't need more policies. It needs architecture that makes trust verifiable.
"Trust but verify" only works when you can verify.

Governance 101: Data Sovereignty
What it means:
Data sovereignty is the principle that data is subject to the laws of the country where it's stored or processed - not where your company is headquartered.
Why it matters for AI:
Every prompt you send to an AI contains data. That data lands on a server somewhere. And wherever it lands, local laws apply.
China's National Intelligence Law requires companies to cooperate with state intelligence services.
GDPR restricts transfers of EU personal data to non-adequate countries.
US CLOUD Act allows US authorities to compel data from US-based providers, regardless of where the server sits.
Your AI vendor may process data across multiple jurisdictions. Their "enterprise security" doesn't override local law.
The Test:
For every AI tool your organization uses, can you answer: "Which country's laws govern my data?"
If you can't answer that question, you don't control your data. Someone else's government does.
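
In practice, the test can start as a plain inventory check: for every tool, record where prompts are processed and stored, and treat any unknown as a failure. A minimal sketch below - the tool names and jurisdictions are placeholders you'd fill in from vendor contracts and data processing agreements.

```python
# Illustrative sovereignty check: every AI tool must have a known
# processing and storage jurisdiction; "we don't know" fails the test.
# Tool names and locations are placeholders, not real vendor data.
inventory = {
    "chat-assistant": {"processing": "US", "storage": "US"},
    "code-copilot":   {"processing": "EU", "storage": None},  # never asked
    "sales-notes-ai": {"processing": None, "storage": None},  # nobody knows
}

for tool, locations in inventory.items():
    unknowns = [k for k, v in locations.items() if v is None]
    if unknowns:
        print(f"FAIL {tool}: jurisdiction unknown for {', '.join(unknowns)}")
    else:
        print(f"PASS {tool}: {locations['processing']} law (processing), "
              f"{locations['storage']} law (storage)")
```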

Questions Your AI Vendor Can't Answer
Next time you're evaluating an AI tool - or auditing one you already use - ask these questions. The answers (or lack of them) will tell you everything.
Data Sovereignty:
Where is my data processed?
Where is it stored?
Which jurisdiction's laws apply to my prompts and outputs?
Access Control:
What data can your AI access in my environment?
Can you show me the permission boundaries?
How do you handle document-level restrictions?
Auditability:
Can you distinguish user actions from agent actions in logs?
Can you produce an audit trail for a specific query?
What's your data retention policy?
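
To keep the evaluation honest, you can turn the list above into a scorecard where an unclear answer counts as a failure. A minimal sketch; the 70% threshold and the verdicts are illustrative assumptions, not an established framework.

```python
# Illustrative vendor scorecard: every question above becomes pass/fail,
# and an unanswered question counts as a failure. Thresholds are assumptions.
QUESTIONS = [
    "Where is my data processed?",
    "Where is it stored?",
    "Which jurisdiction's laws apply to my prompts and outputs?",
    "What data can your AI access in my environment?",
    "Can you show me the permission boundaries?",
    "How do you handle document-level restrictions?",
    "Can you distinguish user actions from agent actions in logs?",
    "Can you produce an audit trail for a specific query?",
    "What's your data retention policy?",
]

def score_vendor(answers: dict[str, bool]) -> str:
    """answers maps each question to True (clear answer) or False."""
    passed = sum(answers.get(q, False) for q in QUESTIONS)  # missing = fail
    ratio = passed / len(QUESTIONS)
    if ratio == 1.0:
        return "proceed"
    if ratio >= 0.7:
        return "proceed with a remediation plan"
    return "do not deploy"

print(score_vendor({q: True for q in QUESTIONS[:6]}))  # -> "do not deploy"
```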
Red Flags to Watch For:
If your vendor responds with any of these, dig deeper:
"Our security is enterprise-grade" - Meaningless. Enterprise-grade isn't a standard.
"We're SOC 2 compliant" - Good for infrastructure, but doesn't address AI-specific risks like prompt injection or data leakage.
"That's handled by our partner" - Who owns the liability when something goes wrong?
If they can't answer clearly, they haven't thought about it. And that's your risk, not theirs.
The Bottom Line
Three stories. One theme.
DeepSeek is banned not because it's bad - but because of where the data goes.
Copilot had a critical zero-click vulnerability - in AI you're already using.
Agentic AI is creating accountability gaps - that security leaders will own.
The AI you use is only as trustworthy as the architecture it runs on.
Vendor promises don't matter. Compliance certifications don't matter. What matters is: Can you see where your data goes, what your AI does with it, and who's accountable when something breaks?
If the answer is "I'm not sure" - that's the gap to close this quarter.
Building AI workflows for your compliance team? Let's talk: [email protected]
Questions? Reply to this email. I read everything.
Until next week - stay skeptical.
Anson Zeall
Founder, Nexusdesk & Azentiq Nexus Consulting
