Your AI compliance roadmap just got rewritten.

On January 1st, two major state AI laws took effect:

- California's Transparency in Frontier AI Act (TFAIA)

- Texas's Responsible AI Governance Act (RAIGA)

Three weeks earlier, Trump signed an executive order designed to void them.

If you're a compliance officer trying to figure out which rules apply to your AI deployments — you're not alone.

The federal government wants preemption. States want enforcement. And you're stuck in the middle with a March deadline looming.

Here's what you need to know.

What California and Texas Just Passed

Two very different approaches. Both now enforceable.

---

California's TFAIA (Transparency in Frontier AI Act)

Who it affects: Developers of "frontier models" — AI systems trained using more than 10^26 computational operations. In practice, this means OpenAI, Anthropic, Google, Meta, and a handful of others.

Revenue threshold: The heaviest obligations fall on "large frontier developers" — those with annual revenue exceeding $500 million.

What it requires:

- Create and publish a "Frontier AI Framework" addressing catastrophic risks

- Implement third-party audits and cybersecurity protections

- Report "critical safety incidents" (e.g., unauthorized access to model weights, deceptive model behavior that subverts safety controls)

The compliance reality: If you're using frontier models (not building them), TFAIA doesn't directly regulate you. But it affects your vendors.

---

Texas's RAIGA (Responsible AI Governance Act)

Who it affects: Anyone developing or deploying AI in Texas or serving Texas residents. Much broader scope.

What it prohibits:

- AI that encourages self-harm, violence, or criminal activity

- AI that violates constitutional rights

- Unlawful discrimination against protected classes

- Creation of CSAM or deceptive deepfakes

Enforcement: Texas Attorney General can issue investigative demands and impose penalties.

The compliance reality: If you deploy AI that serves Texas customers, RAIGA applies to you.

The silver lining: There's an affirmative defense if you're using recognized risk management frameworks like NIST's AI RMF.

Trump's Executive Order: The Preemption Strategy

On December 11, 2025, Trump signed "Ensuring a National Policy Framework for Artificial Intelligence."

The goal: establish federal control over AI regulation and preempt state laws deemed inconsistent.

What the EO does:

1. Litigation task force. Directs the Attorney General to challenge state AI laws in court.

2. "Burdensome" law identification. Secretary of Commerce must identify state laws that impose "unnecessary burdens" on AI development by March 11, 2026.

3. Funding leverage. Conditions federal broadband funding on states avoiding "onerous" AI regulations. Also conditions discretionary grants on state non-enforcement during funding periods.

4. First Amendment angle. Targets laws requiring model output alterations or disclosures as potential First Amendment violations.

What's protected: Child safety provisions, data center infrastructure, and government procurement remain unaffected.

The compliance reality: If you're waiting for clarity before acting — the clarity might be "these rules don't apply anymore." Or it might be "they still do." Nobody knows.

The Compliance Chaos: What To Do Now

Here's the uncomfortable truth: you're operating in regulatory limbo.

State laws are enforceable today. Federal preemption might void them in months. Or it might not.

If you're building AI systems:

Don't wait for clarity. Build for the strictest interpretation.

- Document your AI development practices now

- Implement risk assessments aligned with NIST AI RMF (Texas gives you an affirmative defense)

- Create a "Frontier AI Framework" even if you're not required to

- Assume you'll need to prove your AI doesn't discriminate

If you're deploying AI in your enterprise:

Your vendor contracts just became more important.

- Ask vendors: "Are you compliant with California TFAIA?"

- Ask vendors: "Do you have documentation for Texas RAIGA defenses?"

- Get it in writing — regulatory liability flows downstream

If you serve Texas customers:

RAIGA is enforceable now. The EO hasn't voided it yet.

- Review your AI systems for prohibited uses

- Document your risk management approach

- Prepare for potential AG investigative demands

The March 11 deadline:

Commerce is due to publish its list of "burdensome" state laws by March 11, 2026. Until then, both the state and federal positions are live.

Build for compliance. Document everything. Prepare to pivot.

NEW RESOURCE: 50 AI Prompts for Compliance Officers

I've compiled 50 copy-paste AI prompts specifically for compliance work:

• Inspection prep (simulated Q&A, document checklists)

• Policy drafting (gap analysis, regulatory alignment)

• Risk assessments (EWRA frameworks, control reviews)

• KYC/AML (UBO verification, EDD narratives)

• Board reporting (memos, regulatory responses)

These come from years on both sides — shaping policy with regulators, then running compliance at a licensed firm.

$29, with a 7-day money-back guarantee.

Quick Hits

IBM at Davos: "Governance = Accountability"

IBM and e& (formerly Etisalat), the UAE telecom giant, unveiled enterprise-grade agentic AI at Davos last week. It's built on watsonx Orchestrate with 500+ customizable tools, integrated with IBM OpenPages for governance.

The key quote from IBM's Ana Paula Assis: "Governance and accountability become just as important as intelligence as organizations embed AI into operations."

What this means for you: The hyperscalers are betting on governance as a feature, not a burden. If IBM is building governance into its agentic AI platform, that's the direction enterprise AI is heading.

---

Airia Launches AI Governance Product

On January 14, Airia launched its AI Governance capabilities — the third pillar of its enterprise AI ecosystem, joining Security and Agent Orchestration.

The product addresses EU AI Act, NIST AI RMF, and ISO 42001 compliance.

What this means for you: The vendor ecosystem is catching up. Tools for AI governance are shipping now — not "coming soon."

---

Market Growing 39% CAGR

The Enterprise AI Governance market hit $2.5B in 2025, projected to reach $68.2B by 2035. That's 39.4% annual growth.
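If you want to sanity-check that projection yourself, the standard CAGR formula applied to the cited figures ($2.5B in 2025, $68.2B in 2035, a 10-year horizon) lands right around the quoted growth rate:

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1
start, end, years = 2.5, 68.2, 10  # $B in 2025, $B projected for 2035

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 39%, in line with the cited figure
```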

What this means for you: This isn't a niche concern anymore. Governance is becoming a market category.

The Bottom Line

State AI laws are enforceable today. Federal preemption might void them tomorrow. Nobody knows which rules will survive.

But here's what we do know:

The trend is toward more regulation, not less. Even if Trump's EO succeeds in preempting state laws, it will likely be replaced by federal standards. The question isn't "will AI be regulated?" — it's "by whom?"

Documentation is your insurance. Whatever rules survive, you'll need to prove what your AI does and how you govern it. Start now.

NIST is your friend. Texas explicitly recognizes NIST AI RMF as an affirmative defense. California's TFAIA aligns with similar frameworks. Build on standards that multiple jurisdictions accept.

Vendor contracts matter more than ever. Regulatory liability flows downstream. Get compliance commitments in writing.

The chaos is real. But chaos favors the prepared.

Build for compliance. Document everything. Assume you'll be asked to explain it later.

P.S. The 50 AI prompts guide can help with documentation and risk assessments: bit.ly/ai-prompts-agi

Building AI governance frameworks for your organization? Let's talk: [email protected]

Questions? Reply to this email. I read everything.

Until next week — stay skeptical.

Anson
