The AI Governance Briefing
A Human Signal Production
AI governance briefings, institutional failure case studies, and asymmetric strategy for operators navigating disrupted institutions. New episodes every month.
Failure Files™
When Your GPS Happily Drives You Into The Sea
A GPS unit confidently routes a driver onto a flooded road. Nobody overrode it. This is what ungoverned automation looks like in the physical world — not a chatbot hallucination, but a navigation system with no human override layer routing someone into danger. TAIMScore™ failure analysis: Safety, Trust, and Responsibility controls all fail simultaneously.
Mar 6, 2026 · 2 min
The Anthropic Exodus and Governance Collapse
When a safety leader resigns from one of the most prominent AI labs in the world, that is not a personnel event — it is a governance signal. This episode forensically examines what happens when billion-dollar infrastructure commitments collide with safety protocols, and what institutional operators should read into that pattern.
Feb 20, 2026 · 6 min
Interviews
Making Digital Accessibility Work In The AI Era
Dr. Michele A. Williams joins Dr. Floyd to examine why 97% of the web still presents barriers to disabled users — and why AI is making the problem worse, not better. Accessibility is not an edge case. It is a legal risk, a design failure, and a governance decision baked into every screen your institution ships.
Mar 2, 2026 · Dr. Michele A. Williams
AI Governance: Balancing Innovation With Risk Management
Col. Kathy Swacina and Taiye Lambo of HISPI join Dr. Floyd to discuss Project Cerebellum, holistic AI control layers, and what governance frameworks look like when they actually work in high-stakes institutional environments. Not compliance theater — operational architecture.
Feb 12, 2026 · Col. Swacina + Taiye Lambo
Full Episodes
The Governance Gap: Why AI Contracts Outpace Control Systems
Leadership is signing AI contracts faster than institutions are building the control systems to govern them. That gap — between procurement velocity and governance capacity — is where the lawsuits, the scandals, and the quiet institutional failures live. Dr. Floyd maps the anatomy of the gap and what closing it actually requires.
Feb 14, 2026
A Generational Truce: Gen X and Gen Z in 2026
A quiet but powerful alliance is forming between Gen X and Gen Z — the OG builders and the digital natives — who are realizing they have been fighting the same battle against systems designed to fail them. Gen X brings scars and systems wisdom; Gen Z brings curiosity and the creative refusal to accept old rules. AI is the equalizer: the same tool the gatekeepers tried to control is now open-source retribution. This is the great generational reset.
Nov 13, 2025
Signal Briefs
Digital Accessibility In An AI World
Accessibility is a fundamental human right — and most institutions are failing it at the architecture level. This one-minute brief sets up the full interview with Dr. Michele A. Williams: what Gen X leaders building AI systems need to understand before the technology compounds what the design already got wrong.
Feb 23, 2026
AI Activism For Insiders: This Is Not Ethics Work
Human Signal monitors the gap between stated safety commitments and operational reality. When that gap widens, it is not an ethics problem — it is a systems failure. This brief reframes what institutional operators owe their organizations when the machine starts drifting from its stated mission.
Feb 20, 2026
Noise Discipline: Social Media Destroys Strategic Focus
Attention is the scarcest resource in an AI-disrupted institution. Social media feeds are not neutral — they are architecturally designed to fragment strategic thought. Dr. Floyd introduces the Noise Discipline Framework: treat feeds like radiation zones, consume in controlled doses, and reclaim your attention as a competitive asset.
Feb 6, 2026
Agentic AI Meets Aging Infrastructure
Autonomous AI systems are being deployed onto infrastructure built in the 1970s. The collision is not theoretical — it is already happening. Dr. Floyd maps three specific failure points where agentic AI decision-making breaks against legacy systems, and explains why institutions racing to deploy are racing toward a breaking point they have not modeled.
Feb 5, 2026
AI Ethics & Intergenerational Justice
The governance decisions being made today about AI deployment will be paid for by people who are currently in middle school. Ungoverned AI does not just create present-day risk — it encodes liability across generations. Dr. Floyd examines the intergenerational dimension of institutional AI governance failure and what accountability actually requires.
Feb 4, 2026
Beyond AI: Quantum Computing and Organoid Intelligence
AI was the warning shot. The next wave — organoid intelligence, quantum computing, and the energy infrastructure race underpinning all of it — is already in motion. Who controls the physical layer of these technologies determines who shapes the institutions built on top of them. The L.E.A.C. constraints do not stop at AI.
Feb 3, 2026
Never Miss a Briefing
Subscribe to Human Signal
New episodes every month. Independent analysis. No vendor capture. Just signal.