Stop Banning Shadow AI. Start Managing It.
A practical AI governance framework that went from concept to working policy in 30 days - not 18 months.
"We definitely don't want to be left behind, but we definitely don't want to be the first one to get sued." — That's paralysis, not strategy.
Born from an AI4 roundtable with 40 security leaders from organizations like Capital One, Edwards Air Force Base, and the IMF, all facing the same crisis: employees using unapproved AI tools despite bans.
Who this is for:
- CISOs and security leaders stuck between AI bans and unmanaged experimentation
- CIOs who need governance that matches how people actually use AI
- Governance and risk teams under pressure from boards and regulators
The Shadow AI Crisis
I was supposed to talk about AI security controls at an AI4 roundtable. That lasted about ten minutes. Then someone asked: "How do you handle shadow AI when your executives are already using it?"
Helicopter parents create kids with developmental deficits. Security teams that ban everything create shadow IT. The more you restrict, the more people find workarounds. You're not preventing risk - you're just losing visibility into it.
Here's what research shows:
- % of employees use unapproved AI tools
- % higher breach costs with shadow AI
- % won't stop even if banned
- % lack basic AI access controls
Trying to govern AI without using it yourself is like setting internet strategy in the late '90s without ever having used a web browser or email. The Framework of No isn't working. Bans create shadow usage. 18-month governance projects miss the window.
The Sunlight AI Framework
Reverse the default from "No until approved" to "Yes unless red-flagged."
Think of it like a parent at a playground: you're not following your kid around dictating every move. You're sitting on the bench, watching, ready to intervene if they climb something dangerous. Default to yes. Get involved when something actually goes wrong.
Green: Go Play
Public information, external websites, marketing materials. No corporate secrets. Proceed immediately.
Research shows 72.6% of data used with AI is non-sensitive. That's your green-light territory.
Yellow: Put The Pads On First
Internal presentations without PII or financials. Quick security review: What data? What outcome?
The question isn't "is this allowed?" It's "if this leaks, is it a bad Monday or a lawsuit?"
Red: Prove It Can't Be Done
HIPAA, GDPR, customer financials, trade secrets. Locked down, but security still has to get to yes.
Use the Reddit Test: If this appeared on Reddit tomorrow, would it be embarrassing, damaging, or catastrophic?
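The three zones amount to a simple triage rule: a small set of red categories, a small set of yellow ones, and a default of green. A minimal sketch, assuming a hypothetical category taxonomy (the names and mappings below are illustrative, not a prescribed policy):

```python
# Hypothetical sketch of Green/Yellow/Red triage. Category names are
# made-up examples; a real policy would use your own data classification.

RED = {"hipaa_phi", "gdpr_personal_data", "customer_financials", "trade_secrets"}
YELLOW = {"internal_presentations", "draft_strategy", "org_charts"}
# Everything else (public info, marketing materials) defaults to green.

def classify(data_category: str) -> str:
    """Return the review zone for a given data category."""
    if data_category in RED:
        return "red"     # locked down, but security still has to get to yes
    if data_category in YELLOW:
        return "yellow"  # quick review: what data? what outcome?
    return "green"       # proceed immediately

print(classify("marketing_copy"))          # green
print(classify("internal_presentations"))  # yellow
print(classify("trade_secrets"))           # red
```

The point of encoding it this way is the default: an unknown category falls through to green, which is the "Yes unless red-flagged" reversal the framework argues for.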
Real-World Success
Oxford University
Launched an AI Ambassador programme with 70+ colleagues across divisions. Not a central committee reviewing requests - embedded accountability partners answering practical questions and helping teams use AI tools safely.
That's Sunlight AI: gym buddies in the villages, not a fortress on the hill.
Insurance Company
9 weeks of workshops with 60 HR staff. Hit 60% adoption saturation. Started with yellow-zone work: internal, not regulated, high productivity gain. Built 5 custom tools internally.
They called it the "AI evangelist" approach - and it worked.
Goldman Sachs
Firm-wide AI rollout. Treating AI like every other enterprise tool: sanctioned, monitored, supported - not banned.
The chemicals under the sink are more dangerous than letting your kid play in an open field.
What To Do Monday Morning
Working governance in 30 days, not 18 months
- Measure your shadow AI baseline (SaaS logs for ChatGPT, Gemini, Copilot)
- Classify your top ten data types into green, yellow, red
- Name one accountability partner per business unit
- Ship a two-question form: "What data?" and "What outcome?"
- Pick the team already using AI and run a four-week pilot
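Step one, the shadow AI baseline, can start as a few lines of log analysis. A rough sketch, assuming a made-up "user,domain" log format; adapt the parsing and domain list to whatever your proxy or SaaS gateway actually emits:

```python
# Sketch of a shadow-AI baseline: count requests to known AI tools and
# the number of distinct users making them. Log format is hypothetical.
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}

def shadow_ai_baseline(log_lines):
    """Return per-domain request counts and the set of distinct users."""
    counts, users = Counter(), set()
    for line in log_lines:
        user, domain = line.strip().split(",")
        if domain in AI_DOMAINS:
            counts[domain] += 1
            users.add(user)
    return counts, users

logs = [
    "alice,chat.openai.com",
    "bob,gemini.google.com",
    "alice,chat.openai.com",
    "carol,example.com",
]
counts, users = shadow_ai_baseline(logs)
print(counts)      # Counter({'chat.openai.com': 2, 'gemini.google.com': 1})
print(len(users))  # 2
```

Even a crude count like this turns "people are probably using AI" into a number you can put in front of the board before the four-week pilot starts.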
Get the Full Framework
Read the complete Sunlight AI article with detailed case studies, implementation checklists, and the Reddit Test template.
Read on Substack