The Shadow Side of AI Governance: Embracing Our Corporate Jungian Nightmare
We carry our past with us—the primitive and imperfect corporate structures with their desires for control and orderly spreadsheets—and it is only with enormous effort that we detach ourselves from this burden. The “Enterprise AI Governance Manifesto” now making its rounds among Fortune 500 companies is precisely this burden made manifest, a well-intentioned but ultimately misguided attempt to impose order on chaos. Like Jung wrestling with the Book of Job, I find myself wrestling with this manifesto’s theological certainty that governance will save us from ourselves.
The Collective Corporate Shadow
The manifesto’s insistence on a “clear North Star that unifies AI efforts” reflects our collective corporate shadow—that unconscious desire to control what inherently cannot be controlled. It’s as if we’re trying to build a perfect org chart for a hurricane.
Picture the scene: A committee of executives sits in a gleaming conference room, meticulously crafting principles for a technology that will have evolved three times before their meeting notes are transcribed. “We need alignment,” they insist, as if innovation ever followed a straight line. This is our corporate shadow at work—the denial of complexity in favor of comforting illusions of control.
I assert that if you have not mindfully engaged with the chaotic nature of technological evolution, you may be governing needlessly.
Our Issues Are In Our Tissues (And Our Policies)
The manifesto frames “scattered AI experiments” as a problem requiring centralized coordination. Yet just as the individual shadow contains repressed parts of ourselves often born of trauma, our collective corporate shadow contains a deep fear of the messy process through which innovation actually happens.
The history of technology is not a history of strategic alignment—it’s a history of beautiful accidents, rogue experiments, and unexpected applications. The iPhone wasn’t born from a governance framework. Netflix didn’t pivot to streaming because a committee approved the strategic North Star.
Does that make you uncomfortable? Good. Shadow work should be uncomfortable.
Well-Adapted for Governance, Poorly Adapted for Innovation
If we accept that human organizations evolved culturally from early hierarchical structures, and that the oft-mischaracterized “control is safety” goad had something to do with that evolutionary path, then it follows that we are well-adapted for governance but poorly adapted for innovation.
What does it mean to be well-adapted for governance? It seems to me that the essential trait is the innate ability to dehumanize the creative, unpredictable parts of our organizations. To force compliance, quash experimentation, and enslave creativity, we must be able to view innovation as dangerous. Natural selection in corporations would long ago have weeded out any team that held empathy for the rebels.
Thank you, Frederick Taylor and Henry Ford, for without that very useful dehumanizing management science, organizational creativity would be much more pervasive and far less profitable for consultants selling governance frameworks.
The Accountability Illusion: Taming the Collective Umbra
The manifesto proposes an “accountability framework” with its neat hierarchy from board to individual contributors. This is like trying to assign specific blame for a thunderstorm. When your AI hallucinates content that offends a major client, does it help to know which committee approved it? When your recommendation algorithm develops unexpected biases, will your “clear decision rights” protect you?
One major tech company a colleague of mine worked with spent nine months developing a comprehensive AI governance framework. During that same period, three of their competitors launched experimental AI products, learned from real market feedback, and pivoted twice. By the time the governance framework was approved, it was addressing yesterday’s risks while completely missing today’s opportunities.
Is prejudice based on identity real in AI? Of course! Are we collectively biased in our data and our algorithms? Yes. I may not be. In fact, I insist that I am not. You may not be. You may also insist that you are not. But collectively, yes, we are. Rather than merely scrubbing those undesirable characteristics away through governance theater, might we also make good use of the unruly experiments that more truthfully reflect our innovative potential?
The Shadow AI Underground: Our Digital Jungian Bogeymen
The manifesto warns ominously about “shadow AI” as a failure state to be avoided. This fundamentally mischaracterizes what’s happening. When employees bypass official channels to experiment with AI, they aren’t being rebellious—they’re being human. They’re responding to real business needs that your governance framework is too slow to address.
Consider the case of JP Morgan’s COIN (Contract Intelligence) platform, which revolutionized how the bank reviews legal documents. The system now processes 12,000 annual commercial credit agreements in seconds, work that previously consumed some 360,000 hours of lawyers’ time. What’s rarely mentioned is that COIN began as an experimental initiative: according to published case studies, a small team worked with limited oversight, testing machine learning and image recognition techniques that weren’t part of the bank’s official technology roadmap at the time. Had JP Morgan enforced a rigid governance framework at that early stage, this now-celebrated AI system might never have made it past initial concept review.
As Jung might say, we miss a critically important opportunity when we dismiss our more objectionable, ungoverned AI experiments out of hand. What are these rebellious initiatives trying to tell us, these voices we so ardently silence while pretending they do not permeate our collective character?
First Principles or Organizational Dogma? The AI Aztec Sacrifice
The manifesto’s “first principles” section attempts to codify values like “augmentation, not replacement” and “transparent provenance.” These sound reasonable: just as reasonable as choosing a youth and a maiden to carry their collective shadow sounded to the Aztecs, right up until they ritually sacrificed them.
Are we not doing the same when we codify principles that sound noble but actually serve to sacrifice innovation on the altar of governance? A rigid adherence to “human in the loop” principles might make sense for diagnosing cancer but becomes an absurd bottleneck when deciding which email marketing subject line performs better.
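To make that contrast concrete, here is a minimal sketch, in Python, of the kind of decision that needs no committee: a simple epsilon-greedy bandit choosing among email subject lines. The variants, counts, and function names are invented purely for illustration. The point is that the worst-case failure is one mediocre email, which the next send self-corrects; a cancer diagnosis has no such recovery loop.

```python
import random

# Observed (clicks, sends) per subject line; the variants and counts
# here are invented purely for illustration.
stats = {
    "Last chance to register!": (120, 1000),
    "Your weekly AI digest": (90, 1000),
}

def pick_subject_line(stats: dict[str, tuple[int, int]], epsilon: float = 0.1) -> str:
    """Epsilon-greedy bandit: explore occasionally, otherwise exploit
    the variant with the best observed click-through rate."""
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore a random variant
    # Exploit: highest observed click-through rate so far.
    return max(stats, key=lambda s: stats[s][0] / max(stats[s][1], 1))

print(pick_subject_line(stats))  # no committee, no approval form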
Each year, the Governance Aztecs choose a promising AI project and sacrifice it to ensure that all the proper documentation has been filed. The people are so grateful for this service that, until its death, the project team is treated with respect. They are representatives of the next world: the world of compliance.
Embracing Our Digital Doppelgängers
Instead of the comprehensive governance framework proposed in the manifesto, I propose we embrace our shadow side and adopt a different approach:
1. Recognize the Shadow: Acknowledge that governance is often about fear, not safety. Name it. Face it.
2. Set Ethical Boundaries, Not Bureaucratic Ones: Be clear about ethical red lines, but recognize that forms, committees, and approvals don’t create ethics.
3. Create Safe-to-Fail Environments: Rather than preventing failure, create spaces where failure is cheap, fast, and educational (a minimal sketch of what that might look like follows this list).
4. Embrace the Messiness: Innovation is inherently untidy. Stop trying to organize it into submission.
5. Look for the Signal in the Shadow: When people attempt a workaround to your governance, ask what need they are meeting that your official processes don’t address.
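What might a safe-to-fail environment look like in code? A minimal sketch, assuming a hypothetical SandboxPolicy whose names, budget cap, and expiry window are all invented: the fence limits the cost of failure rather than the possibility of it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

def _thirty_days_out() -> datetime:
    return datetime.now(timezone.utc) + timedelta(days=30)

@dataclass
class SandboxPolicy:
    """Illustrative guardrails; the names and limits are invented."""
    max_spend_usd: float = 500.0            # failure stays cheap
    allow_production_data: bool = False     # blast radius stays small
    log_all_calls: bool = True              # failure stays educational
    expires_at: datetime = field(default_factory=_thirty_days_out)  # fast by default

def run_in_sandbox(experiment, policy: SandboxPolicy, spent_usd: float):
    """Run an experiment only while the cheap-fast-educational limits hold."""
    if datetime.now(timezone.utc) > policy.expires_at:
        raise RuntimeError("sandbox expired: renew deliberately, not by inertia")
    if spent_usd > policy.max_spend_usd:
        raise RuntimeError("budget cap reached: failure should stay cheap")
    return experiment()  # inside the fence, the experiment runs unrestricted
```

The design choice is the point: nothing inside the fence requires approval; only the size of the fence does.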
We now have collective unconscious doppelgängers in our organizations—those ungoverned experiments happening in the shadows. We have a glimpse into the collective creativity that haunts us. We have met the enemy, and they are us. Let’s not lose that battle by over-governing them out of existence.
Conclusion: The Book of Jobs (Steve, Not the Biblical One)
The “Enterprise AI Governance Manifesto” appeals to executives’ desire for control in an inherently unpredictable technological environment. But that control is as illusory as the calm before a storm. Organizations that thrive in the AI era won’t be those with the most comprehensive governance frameworks—they’ll be those that build cultures that dance with chaos rather than deny it.
While your governance committees are conjugating verbs like “prioritize,” “optimize,” and “strategize,” your competitors are acting on verbs like “build,” “launch,” and “learn.”
All of this presumes a shared understanding—that innovation is messy, non-linear, and resistant to governance. What do our machines have to say about that? They’re too busy creating value to fill out your governance forms.
We have the means today to peer into the abyss of AI’s potential. Those means have been emerging not through governance but through experimentation. Rather than silence those voices with manifestos and frameworks, what if we embraced them as tools of growth?
The choice isn’t between governance and chaos. It’s between the comforting illusion of control and the uncomfortable reality of adaptation. Choose adaptation. It’s the only sustainable governance strategy in an age of perpetual change.
The Missing Prime Directive
Do not govern to restrict what AI can become; govern to expand what humanity can achieve. The measure of successful governance is not the absence of abject failures, but the emergence of previously impossible breakthroughs.