Founder · Principal Analyst
Human Signal · Independent AI Media
I am the founder of Human Signal — an independent AI governance research and media platform for leaders inside AI-disrupted institutions: federal agencies, universities, and enterprises racing to deploy autonomous AI systems without the governance infrastructure to keep those systems from breaking the institution.
I reverse-engineer institutional failures, build frameworks that operators can actually use, and document what happens when organizations treat AI as a procurement problem instead of a systems design problem.
My work bridges the gap between deep technical systems design and operational reality — ensuring operators have the clear signal they need to navigate AI safety and governance.
Independent Research
Human Signal operates without vendor funding, advertising, or institutional capture. The research remains independent because readers, listeners, and institutional partners choose to sustain it. There are three ways to support this work.
Individual
Make a one-time or recurring contribution directly to the research. Every amount sustains the independence of this analysis.
Contribute →
Institutional
Organizations that share Human Signal's commitment to responsible AI can partner as named underwriters — with full editorial independence preserved.
See Tiers →
Grants
Foundations and public interest organizations seeking to support independent AI governance research are encouraged to reach out directly.
Get in Touch →
Frameworks
My doctorate in Adult Education is the intellectual engine behind everything I build. For fifteen years I have studied how institutions learn, resist, and break under structural pressure. Adults do not learn governance from documentation. They learn it from failure. That insight drives every framework, every case, every publication under Human Signal.
Analysis
Two levels of institutional AI governance failure. Structural absence. Structural insufficiency. Permitted is not the same as admissible.
Read the framework →
Diagnostic
Governance as a Structural Problem. Most institutions do not have a governance problem because they lack the right software. They have a governance problem because they never built the right structure.
Read the diagnostic →
Thesis
Most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.
Read the framework →
Practice
Cognitive defense for operators drowning in vendor hype. A structured practice for cutting through artificial noise and protecting institutional judgment.
Learn the practice →
Framework
Four physical constraints every AI strategy must address: Lithography, Energy, Arbitrage, Cooling. If your strategy does not address all four, you are leaking value.
Read the protocol →
Architecture
Presence Signaling Architecture and AI as Presence Interface — frameworks for restoring human visibility in systems designed to observe, not listen.
Read the architecture →
Experience
My career has been split between fixing systems under pressure and studying why they break.
Federal Operations
Technical strategy and program management supporting federal IT modernization — where outages and bad data have real-world consequences.
Enterprise Resilience
Led disaster recovery, COOP design, and large-scale systems migrations serving 7,000+ users, including recovery from cross-functional governance failures.
Systems Research
Doctoral research on how institutions adapt to (or reject) structural controls, so governance becomes something people actually follow rather than route around.
Now
I am building Human Signal as the premier independent media and educational platform for AI governance — providing documented institutional failures, original frameworks, and honest analysis for the people who have to make decisions inside systems they did not design.
Through corporate underwriting I partner with responsible AI startup founders and compliance officers. This public broadcasting model allows builders to fund independent research while securing visibility with the 320,000+ tech professionals I engage across my network, without bending the analysis.
Building Season 2 of Human Signal and developing visual strategy playbooks for institutional operators. Open to corporate underwriting, advisory roles, and speaking engagements on AI governance, institutional resilience, and systems design.
Capabilities
Direct the production of the Human Signal podcast and The Failure Files™ video series — converting complex AI governance topics into accessible, independent research.
Design and execute corporate underwriting and sponsorship packages for responsible AI startup founders and enterprise risk leaders, securing visibility with the 320,000+ tech professionals I engage across my network.
Translate emerging AI regulations and federal guidance into operational strategies for leaders navigating AI-disrupted institutions.
Provide strategic consulting on AI infrastructure viability leveraging proprietary frameworks — The LEAC Protocol and the Role Signal Analyzer.
Built a reusable AI governance playbook mapping NIST 800-53 and FedRAMP readiness controls to checkpoints in AI-augmented workflows — guiding institutional operators and sponsors on compliance positioning.
Designed and enforced Hyperprompt, a context control protocol for LLM-enabled professional workflows that reduces hallucination risk for knowledge workers.