Why Institutions Will Not Learn to Govern AI Until We Teach the Way Adults Actually Learn
“Most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.” — Tuboise Floyd, PhD · Human Signal
The AI governance field is producing frameworks at scale. Policies, ethics boards, compliance checklists, certification programs — the documentation infrastructure is proliferating. Institutions are not failing because they lack governance documents. They are failing because they do not know how to internalize governance at the point of execution.
This paper introduces AI governance pedagogy as a field-level diagnostic — arguing that the failure to teach governance the way adults actually learn is the unnamed structural flaw beneath every framework the field has produced. Solving it requires applying the same theoretical foundation that Malcolm Knowles applied to adult education in the 1970s. Adults do not learn through documentation. They learn through experience, problem-centered engagement, and relevance to challenges they are already inside.
The position draws on a 2010 doctoral dissertation examining why Georgia workforce educators failed to practice the learner-centered philosophies they professed, a complete IP architecture developed across fifteen years of applied practice, and a practitioner-facing media platform that has operationalized these principles at institutional scale. The argument is structural, not theoretical: AI governance fails at the pedagogy layer, and the field lacks a founding theorist to name that layer and build the scaffold around it.
That gap is the opportunity. And this paper is the stake in the ground.
Walk into any enterprise AI governance rollout and you will find the same artifacts: a policy document no one reads past the second page, a training module that runs forty-five minutes and earns a completion badge, an ethics board that meets quarterly and produces a report that goes into a shared drive. The infrastructure is real. The learning is not.
What the field has produced, at scale, is a governance education apparatus that treats adults as passive recipients of compliance information and then registers surprise when the information does not hold under operational pressure. That failure is not technological, and it is not attributable to weak leadership in any conventional sense. The pedagogy layer has gone unnamed because the field has been hyperfocused on what institutions should know instead of asking how institutions actually learn.
The dominant approach to AI governance education operates from two flawed assumptions: that knowledge transfer equals behavioral change, and that adults learn governance the same way they learn software — through instruction, demonstration, and certification. Fifty years of empirical research in adult learning theory says otherwise. Effective governance education must be designed around the needs, goals, and specific use cases of the practitioner — not the compliance calendar of the institution. The purpose of the teaching-learning transaction is to elicit change in the learner. If the design does not produce that change, it is not education.
It is documentation with a deadline.
Malcolm Knowles identified the core distinction in 1970: adult learners are self-directed and self-motivated, conceptualizing learning in terms of freedom and autonomy, cooperation and participation. They bring prior experience to every learning encounter. They are internally motivated, problem-centered, and require immediate relevance to what they are being asked to learn. Adults have a deep psychological need to be self-directing — and that need does not suspend itself because an institution has scheduled a training module.
The model of pedagogy derives from the Greek words paid (meaning “child”) and agogus (meaning “leading”). Pedagogy is literally the art and science of teaching children — a process built on the principle that education is the transmittal of known knowledge and skills to a dependent learner. From the pedagogical view, the learner’s self-concept, experience, readiness, and orientation to learning rest on a fundamentally different premise. The concept of the learner, according to Knowles (1980),
…is a dependent one and the teacher is expected by society to take full responsibility for determining what is to be learned, when it is to be learned, how it is to be learned, and if it has been learned — the experience learners bring to a learning situation is of little value.
Andragogy — the art and science of helping adults learn — assumes the opposite. Not an empty vessel waiting to be filled, but a whole person who will only absorb what connects to lived experience, prior knowledge, and the problems they are already trying to solve.
The AI governance field is applying the pedagogical model to an andragogical problem. The result is governance documentation that institutions cannot internalize, cannot operationalize, and cannot execute under pressure — because pressure is precisely where documentation becomes useless and structural learning becomes essential.
In 2010, this author completed a doctoral dissertation at Auburn University examining the adult educational philosophies and teaching styles of workforce educators and entrepreneurship instructors within the State of Georgia. The study employed two validated instrumentation frameworks: the Principles of Adult Learning Scale (PALS), developed by Gary Conti, which measures the frequency with which an instructor practices one teaching style over another along a learner-centered to teacher-centered continuum; and the Philosophy of Adult Education Inventory (PAEI), developed by Lorraine Zinn, which identifies the underlying philosophical orientation — progressive, behaviorist, humanist, liberal, or radical — governing an educator’s approach to the teaching-learning transaction.
Sixty-two surveys were returned from each population. Reliability coefficients registered Cronbach’s alpha of .99 for both instruments. Mean scores on the PAEI trended higher on the progressive and behaviorist orientations, with participants reporting no strong disagreement across all five educational philosophies — a pattern consistent with existing literature suggesting that instructors may not be aware of inconsistencies within their own beliefs absent deliberate philosophical self-examination.
The central finding was unambiguous: total mean scores on the PALS fell below the mean established by Conti (2004), indicating that participants tended toward teacher-centered rather than learner-centered practice. Entrepreneurship instructors scored higher than workforce educators across all teaching style factors, but neither population was practicing at the learner-centered register their stated philosophies implied. They professed learner-centered beliefs, but their instructional practice did not reflect them.
The gap between philosophical orientation and classroom execution was not incidental. It was structural. The institution, the delivery context, and the default assumptions embedded in professional practice were overriding the very philosophies these educators held.
That finding revealed a pattern: a two-level structural gap, spanning both absence and insufficiency.
What the dissertation documented was structural insufficiency in pedagogy: a system in which the governance framework existed but could not intervene at the moment of execution. The educator had the right beliefs. The structural conditions overrode those beliefs in practice.
This is precisely the pattern that AI governance failures reveal. UnitedHealthcare maintained insurance contracts that explicitly stipulated coverage decisions would be made by clinical staff. The nH Predict algorithm — deployed through its NaviHealth subsidiary and documented to carry a 90% error rate on appeals — operated systematically outside that contractual commitment. The governance framework named the standard. The algorithm never encountered it.
Air Canada’s terms of service prohibited retroactive bereavement fare applications. Its customer-facing chatbot promised the opposite — advising a grieving passenger that he could purchase a full-fare ticket and apply for the bereavement discount within ninety days of travel. The Tribunal called Air Canada’s defense “remarkable” and rejected it. The policy existed. The system it governed did not know the policy existed.
Zillow’s collapse is the most instructive case because the governance failure was not passive — it was enforced. Under Project Ketchup, Zillow’s leadership explicitly prevented its pricing experts from modifying the algorithm’s home valuations and directed them to stop questioning its outputs. Human override was not merely unavailable. It was prohibited. The algorithm was not ungoverned. It was protected from governance.
In 2010, a doctoral dissertation examining Georgia workforce educators found the same structural condition operating in a different domain: practitioners who held the right beliefs, inside institutions whose structural conditions prevented those beliefs from reaching the point of execution. The field was not named yet. The pattern was already present.
The Trust Gap framework formalizes the dissertation’s central finding into two diagnostic levels. Not as a typology of bad actors, but as a structural map of how governance fails in organizations that believe they are governing.
The first level is structural absence: no governance framework exists. The institution has no documented protocol, no escalation path, no accountability structure for AI decision-making. The structure did not fail; it was never built. The Amazon warehouse fulfillment algorithm that systematically scheduled workers at injury-producing pace operated inside an institution with no AI governance architecture capable of asking whether the optimization target was the right variable. There was no framework to fail because there was no framework to begin with.
The second level is structural insufficiency: governance exists. Policy has been written. Ethics boards have convened. And still, the algorithm runs without encountering any of it. UnitedHealthcare had coverage standards — the nH Predict algorithm processed denials at a scale and speed those standards could not reach. Air Canada had a bereavement policy — the chatbot never consulted it. Zillow had pricing experts — Project Ketchup made their judgment structurally irrelevant.
Permitted is not the same as admissible. That distinction, borrowed from the language of evidence and proof, is the precise diagnostic the field has been missing. A governance framework that permits a decision without requiring that decision to pass through an accountability structure has not governed anything. It has documented an intention. Documentation is not governance. It is the precondition for governance that was never completed.
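The permitted-versus-admissible distinction can be sketched as a decision gate. The sketch below is a hypothetical illustration, not tooling published by any framework named here; the class and function names are invented. The point it makes is structural: a decision the written policy permits still does not execute unless it passes through an accountability structure, checked here against the three diagnostic questions the GASP™ framework poses.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    action: str
    owner: Optional[str]            # who owns the decision?
    escalation_path: Optional[str]  # what is the escalation path?
    vendor_independent: bool        # does accountability exist without the vendor?

def is_permitted(decision: Decision, policy: set) -> bool:
    """Permitted: the written policy allows this action on paper."""
    return decision.action in policy

def is_admissible(decision: Decision) -> bool:
    """Admissible: the decision passes through an accountability
    structure before execution, not merely a policy document."""
    return (decision.owner is not None
            and decision.escalation_path is not None
            and decision.vendor_independent)

def execute(decision: Decision, policy: set) -> str:
    """Gate execution on both conditions, in order."""
    if not is_permitted(decision, policy):
        return "blocked: not permitted by policy"
    if not is_admissible(decision):
        return "blocked: permitted but not admissible"
    return "executed under accountability structure"
```

Run against the nH Predict pattern, a denial the policy permits but no human owner and no escalation path, the gate refuses at the second check — which is precisely the check the documented failures never implemented.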
The field is not missing frameworks. It is missing the structural conditions that make frameworks executable. And those structural conditions are built through learning — specifically, through the kind of learning that adults actually do: experience-centered, problem-grounded, and failure-forward.
Malcolm Knowles did not produce a theory and leave it to the field to operationalize. He built the field itself — constructing, simultaneously, the theoretical architecture that named a real and previously unarticulated phenomenon, the practitioner vocabulary that provided adult education with a shared analytical language, and the instructional system through which that vocabulary could be applied at the point of delivery. The durability of andragogy as a school of thought rests in part on a finding that runs counter to the evaluative logic embedded in most institutional training programs: when adult learners perceive the outcome of a learning experience as re-diagnosis rather than evaluation, they enter the learning activity with greater enthusiasm and engage with it as a constructive rather than corrective process.
That distinction — between being assessed and being equipped — is foundational to andragogy itself. It produces a workforce that no longer understands its primary role as that of the full-time learner, but as one of increasing competence through doing: practitioners who are producers first and students by design. Their prior life experience is not contextual background. It is the force for learning. Andragogy, at its core, is a process in which individuals take initiative in designing their own learning experiences, grounded in the experiential capital they have already accumulated and are actively deploying in pursuit of greater competence and professional standing. Translated into the governance domain, that process does not generalize. It operates case by case — each incident a situated learning encounter, each failure a diagnostic instrument, each practitioner bringing the irreducible specificity of their own institutional context to bear on the analysis.
Knowles positioned andragogy as the necessary corrective to pedagogy’s misapplication in adult learning contexts — not as a competing theory, but as a more accurate account of how adult cognition actually operates. The question he answered was deceptively simple: how do adults actually learn?
The parallel question for AI governance is equally deceptive in its simplicity: how do institutions actually internalize governance?
The answer follows the same logic. Not through documentation, which addresses the artifact of governance without producing the behavior. Not through compliance training, which satisfies the regulatory requirement without building the structural internalization that holds under operational pressure. Through failure cases — real incidents, structurally analyzed at sufficient depth that practitioners can enter them as lived experience by proxy, apply their own institutional context to the diagnostic, and develop the governance judgment that no policy document has ever produced on its own.
This is not a novel proposition. It is Knowles applied with precision to a domain that has constructed an entire education apparatus on the pedagogical assumptions he spent his career dismantling.
Knowles identified six principles of andragogy. Three bear directly on the AI governance pedagogy problem — and each one indicts the compliance model on its own terms.
The first is the need to know. Adults will not engage with learning content until they understand why that content is relevant to a problem they are already inside. Compliance documentation answers that question with the weakest possible justification: because the regulator requires it. The Failure Files™ answer it with structural precision: because this institution — in this sector, with this governance architecture — failed in exactly the way your institution is currently structured to fail. Relevance is not asserted. It is demonstrated.
The second is orientation to learning. Adult cognition organizes itself around problems, not subjects. Abstract AI ethics instruction, disconnected from the specific operational contexts in which governance decisions are actually made, does not activate the cognitive processing that produces durable learning. A structurally dissected case study of an AI system denying life-sustaining healthcare claims at scale — with the governance control analysis mapped directly to the learner’s institutional architecture — does. The problem is not simulated. It is real, documented, and consequential.
The third is the role of prior experience. Adults do not arrive at learning encounters as empty vessels. They arrive as practitioners carrying institutional knowledge, professional judgment, and direct experience with governance structures that have and have not held. The Failure Files™ methodology is engineered to activate that prior knowledge rather than bypass it — requiring learners to apply their own institutional context to each case, situating the diagnostic work inside the professional reality the learner already occupies. The analysis cannot be completed at a distance. It demands the practitioner’s presence.
Knowles had one framework and built a field. What follows documents an integrated system of seven frameworks, a practitioner media platform, a case library, a credentialing infrastructure, and a public accountability mechanism — all constructed from the same theoretical foundation Knowles established and the dissertation confirmed. The scaffold is not theoretical. It is operational.
A school of thought requires more than a framework. It requires a complete intellectual architecture: a founding diagnosis, a theoretical foundation, a practitioner vocabulary, a pedagogical instrument, and a delivery infrastructure. The following IP architecture constitutes that complete system for AI governance pedagogy.
| Framework | Function |
|---|---|
| Workflow Thesis | Names the institutional failure mode. Governance structure — not model performance — is the unit of risk. |
| Trust Gap (v3) | Two levels: Structural Absence and Structural Insufficiency. Permitted is not the same as admissible. |
| GASP™ | Governance As a Structural Problem. Three diagnostic questions: who owns the decision, what is the escalation path, and what accountability exists independent of the vendor. |
| Noise Discipline | Cognitive defense against vendor capture and hype. Protects practitioner judgment during governance design. |
| L.E.A.C. Protocol™ | Four physical AI infrastructure constraints: Lithography, Energy, Arbitrage, Cooling. If your AI strategy does not address all four, you are leaking value. |
| PSA® / AIaPI™ | Presence Signaling Architecture. Identity-coded signal as the primary interface with algorithmic systems. Human presence as infrastructure, not performance. |
| Failure Files™ | Pedagogical instrument. Real AI governance failure cases structured through TAIMScore™ domains. Adults learn governance through failure by proxy, not compliance documentation. |
The frameworks are not independent tools. They constitute a diagnostic and pedagogical sequence: the Workflow Thesis names the failure mode, the Trust Gap and GASP™ diagnose it, Noise Discipline and the L.E.A.C. Protocol™ protect the design work from hype and physical constraint, PSA®/AIaPI™ grounds it in human presence, and the Failure Files™ teach it.
This is not a collection of frameworks. It is a curriculum with a founding theory.
The Failure Files™ are the pedagogical instrument that closes the gap the dissertation identified. A case library designed from the ground up to activate the conditions under which adults actually learn governance.
Adults do not learn governance from documentation because documentation is abstract. They learn it from failure because failure is concrete, consequential, and structurally analyzable. When an adult learner steps inside a real AI governance failure — reads the incident summary, applies the governance control analysis, maps the TAIMScore™ diagnostic domains, and extracts the structural lessons — they are not receiving information. They are experiencing governance failure by proxy.
Proxy experience is the mechanism. It activates the same cognitive processing that real experience does, without requiring the learner’s institution to absorb the actual cost of the failure.
Air Canada’s AI chatbot provided a passenger with bereavement fare information that contradicted the airline’s actual policy. When the passenger sought the fare, Air Canada argued before the British Columbia Civil Resolution Tribunal that its chatbot was a separate legal entity responsible for its own actions. The Tribunal called that submission “remarkable” and rejected it. The governance failure: no accountability structure existed for AI-generated customer commitments.
The chatbot was permitted to make representations. Those representations were not admissible as policy. Permitted is not the same as admissible.
UnitedHealthcare deployed the nH Predict algorithm through its NaviHealth subsidiary to process post-acute care claims. The model carried a documented 90% error rate on appeals — meaning nine of ten denied claims that were challenged were ultimately reversed. The governance failure: the AI was permitted to make decisions at scale without an escalation structure capable of intervening at the point of execution before harm occurred.
The governance framework existed. It could not reach the algorithm.
Zillow’s Offers program produced $304 million in Q3 2021 losses before shutdown, with total write-downs exceeding $500 million. Under Project Ketchup, Zillow leadership explicitly prevented pricing experts from modifying the algorithm’s valuations and directed them to stop questioning its outputs. The governance failure was not passive.
Human override was prohibited by institutional design. The model was not ungoverned. It was protected from governance.
Each case is not a cautionary tale. It is a governance curriculum. The practitioner who works through all twelve cases has not read about AI governance. They have practiced it — by proxy, with structure, in the mode that adults actually learn.
The claim being made in this paper is not modest. It is stated plainly:
The AI governance field needs a Malcolm Knowles. The role is available. The theoretical foundation, the IP architecture, the practitioner platform, and the institutional track record exist to fill it.
Knowles answered: how do adults learn? He built andragogy as the answer. The AI governance field has produced frameworks, regulations, certifications, and ethics boards. It has not produced an answer to the question that determines whether any of those things actually work: how do institutions learn to govern?
The answer is andragogical — failure-case-first, experience-centered, problem-oriented, structurally grounded. It is Knowles applied to the most consequential governance challenge institutions have faced since the financial crisis.
The window is not indefinite. Institutional AI governance programs are beginning to develop internal pedagogies. The question is not whether someone will eventually name the pedagogy problem in AI governance. The question is whether the founding theorist position is claimed before the field calcifies around inferior assumptions.
The following comparison situates the andragogical model against the two currently dominant approaches. Neither dominant approach is wrong. Both are insufficient at the layer that determines whether governance actually executes.
| Approach | Foundation | Pedagogy | Success Metric | Driver |
|---|---|---|---|---|
| Compliance Model | Policy documentation | Delivery / presentation | Audit trail | Regulatory mandate |
| Technical Model | AI safety research | Model alignment | System performance | Engineering standards |
| Andragogical Model (Floyd) | Adult learning theory | Failure-case-first learning | Structural internalization | Institutional readiness |
The compliance model produces documentation that satisfies regulators. The technical model produces systems that perform within safety bounds. Neither model addresses whether the human practitioners inside institutions can actually execute governance decisions under operational pressure.
The andragogical model operates at the layer beneath both — the learning infrastructure that determines whether compliance documentation and technical standards translate into actual governance behavior. Without it, the other two models produce artifacts. With it, they produce capability.
Every institution currently operating an AI governance program should ask: where is our learning infrastructure? Not our policy infrastructure. Our learning infrastructure. The answer determines how close the institution is to a Trust Gap failure — and at which level.
Compliance training is not a learning infrastructure. A forty-five-minute module does not produce the kind of structural internalization that holds under operational pressure. Institutions need curriculum — failure-case-first, experience-centered, structurally grounded — that builds the muscle memory before the pressure arrives.
The field has produced frameworks. It has not produced the language for why those frameworks fail to stick. The pedagogy problem gives practitioners a diagnostic frame: when governance fails at execution, the first question is not what policy was missing. The question is what learning was missing.
Practitioners who can diagnose governance failure at the pedagogy layer — not just the policy layer — are operating at a level the field has not yet named. That is both a competitive advantage and a professional responsibility.
AI governance is not yet a mature discipline. It is a set of practices in search of a theoretical foundation. A field without a founding theorist at the pedagogy layer will continue producing documentation that institutions cannot internalize.
The pedagogy problem will not solve itself. Regulation will not solve it. Technical alignment will not solve it. The claim being made here is that the theoretical work is done, the scaffold is built, and the delivery infrastructure is operational. What remains is for the field to receive it.
If your governance training produces completion rates but not behavioral change — if your practitioners can pass the assessment but cannot execute under pressure — the problem is not the content. It is the pedagogy. You are teaching adults like children and calling it governance education.
A 2010 dissertation found that Georgia workforce educators believed in learner-centered practice and could not execute it. The gap between professed philosophy and actual delivery was the finding. Fifteen years later, that finding describes the dominant AI governance failure mode: institutions with governance frameworks that cannot intervene at the point of execution.
The Trust Gap named it. GASP™ diagnosed it. The Failure Files™ built the pedagogical instrument to close it. The PSA® architecture grounded it in presence and identity. The media platform operationalized it for a practitioner audience. The SSRN preprint staked the scholarly claim. The USCO registration protected the IP.
This is not a framework. It is a school of thought. And schools of thought require founding theorists who are willing to name what the field has not yet named, build what the field has not yet built, and deliver it to practitioners who are already inside the problem.
The AI governance field needs a Malcolm Knowles. The theoretical foundation is laid. The scaffold is built. The delivery infrastructure is operational.
→ View SSRN Preprint