A Human Signal Publication
Signal Briefs & Analysis
Institutional AI failure analysis, governance frameworks, and written intelligence from Dr. Tuboise Floyd. Tied to the show. Grounded in evidence. No vendor capture.
humansignal.io/blog
Latest Post
Failure File™ · Air Canada Chatbot: When Your AI Invents Policy
Air Canada's chatbot promised a bereavement fare that didn't exist. A court held the airline liable anyway. Dr. Tuboise Floyd scores the governance collapse — GOVERN 1.1, GOVERN 1.7, MANAGE 1.1, MANAGE 4.1 — and the precedent that means you own what your AI says.
All Posts
Air Canada Chatbot: When Your AI Invents Policy
A chatbot hallucinated a bereavement refund policy. A court held the airline liable. The precedent: you own what your AI says. GOVERN 1.1 · GOVERN 1.7 · MANAGE 1.1 · MANAGE 4.1.
Apr 5, 2026 · Dr. Tuboise Floyd
Failure File™ · UnitedHealthcare AI Claim Denials: When the Algorithm Overrules the Doctor
nH Predict overrode physician determinations to deny post-acute care claims, with a reported 90% of appealed denials reversed. Federal lawsuits followed. GOVERN 1.1 · GOVERN 2.2 · MEASURE 2.5 · MEASURE 2.11.
Apr 5, 2026 · Dr. Tuboise Floyd
Failure File™ · Zillow iBuying Collapse: $881M in Losses and a MAP Control That Was Never Built
The Zestimate was a consumer estimation tool. Zillow used it to buy houses. $881M in write-downs later, the company shut the program down. MAP 1.5 · MAP 5.2 · MEASURE 2.5 · MEASURE 4.1.
Apr 5, 2026 · Dr. Tuboise Floyd
Failure File™ · The Anthropic Exodus and Governance Collapse
What the resignation of Anthropic's head of safeguards research tells us about governance collapse under capital pressure — and what the L.E.A.C. Protocol™ reveals about why it will happen again.
Apr 5, 2026 · Dr. Tuboise Floyd
Interview · 50 min · AI Governance: Balancing Innovation With Risk Management
Col. Kathy Swacina and Taiye Lambo on the death spiral of ungoverned AI, the TAIMScore™ Top 20 controls, PACE planning, and why intelligence is abundant but trust is scarce.
Apr 5, 2026 · Col. Kathy Swacina & Taiye Lambo, HISPI
Interview · 51 min · Digital Accessibility in the AI Era: Making It Actually Work
Dr. Michele A. Williams on ableism, the disability dongle, why AI encodes inaccessibility at scale, and a 90-day leadership commitment that actually sticks.
Apr 5, 2026 · Dr. Michele A. Williams
Forum · Live · AI Governance Open Forum: Never Blindly Trust — Always Verify
Full transcript and analysis from the Georgetown University forum. AI literacy, deepfakes, honest human oversight, and what students entering the workforce need to know right now.
Apr 5, 2026 · Taiye Lambo, HISPI
NIST AI RMF · NIST AI RMF GOVERN Function Explained: What It Actually Requires
GOVERN is the foundational layer of NIST AI RMF — but most institutions treat it as a checkbox. Dr. Floyd breaks down what GOVERN actually requires at the practitioner level, and where structural gaps appear.
Apr 6, 2026 · Dr. Tuboise Floyd
NIST AI RMF · How to Operationalize NIST AI RMF: A Practitioner's Guide for Institutional Operators
Adopting NIST AI RMF and operationalizing it are different acts. Dr. Floyd maps the five practitioner steps — from GASP™ diagnostic to TAIMScore™ assessment — that move governance from policy to structure.
Apr 6, 2026 · Dr. Tuboise Floyd
NIST AI RMF · TAIMScore™ vs. NIST AI RMF: What Each Framework Does and Doesn't Do
NIST AI RMF is the mandate. TAIMScore™ is the mechanism. Dr. Floyd maps the relationship, the direct domain alignment, and how institutional operators use both to build audit-ready AI governance.
Apr 6, 2026 · Dr. Tuboise Floyd
NIST AI RMF · NIST AI RMF for Small Organizations: What Scales and What Doesn't
NIST AI RMF assumes enterprise scale. Most institutions don't have it. Dr. Floyd maps the structural minimums that apply at any size, what can be proportionate, and what the minimum viable governance structure looks like.
Apr 6, 2026 · Dr. Tuboise Floyd
Never Miss a Briefing
Subscribe to Human Signal
New episodes every month. Independent analysis. No vendor capture. Just signal.