Situational Blind-Spots
“Prediction is very difficult, especially about the future.”
— Niels Bohr (maybe)
Recently I’ve started bingeing “Foundation” on AppleTV, enjoying it with as much ardor as when I first read Asimov’s trilogy as a child. If you are not familiar with the Foundation Trilogy, forgive the spoiler alerts, but it does happen to be one of the foundations (no pun) of modern science fiction. While lauded for its grand scope and imaginative world-building (particularly the concept of psychohistory), it has also been criticized for flat characters, stiff dialogue, and a disconnected narrative born of its significant jumps between stories. To me, however, it was an awakening of sorts: the child who consumed the volumes was inspired by the possibilities of the math behind psychohistory, and today the series is a reminder of so many lessons learned along the way. The Foundation series on AppleTV is what Star Wars might have been had its target audience been a tad more cognitively gifted. But I digress.
With this blog entry I am comparing Hari Seldon, the fictional genius mathematician who created psychohistory, to Leopold Aschenbrenner, the real-life genius who wrote the 2024 manifesto, “Situational Awareness: The Decade Ahead.” In comparing the two, I will extol their virtues but also highlight their blind spots.
Let us not begin with the blind spots: Seldon and Aschenbrenner share more than just their forecasting foibles; they’re cut from the same visionary cloth. Both are intellectual heavyweights wielding data like a crystal ball, peering into chaotic futures and betting big on patterns to steer the ship.
Seldon’s psychohistory crunches galactic crowds into millennia-spanning predictions, while Aschenbrenner rides the wave of AI scaling laws, plotting OOM (orders of magnitude) jumps toward AGI by 2027. It’s that same hubris of the model-maker: assume the trends hold, and voilà, you’ve got a roadmap to salvation.
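To see how load-bearing that assumption is, consider a toy sketch of the straight-line extrapolation; every number in it, including the hypothetical `trend` constant, is an illustrative assumption of mine, not a figure from the manifesto.

```python
# A toy straight-line forecast in the spirit of "assume the trends hold."
# All numbers are illustrative assumptions, not real estimates.

def extrapolate_ooms(baseline_ooms: float, ooms_per_year: float, years: int) -> float:
    """Effective compute grows by a fixed number of orders of magnitude
    (OOMs) per year, so the forecast is a straight line on a log scale."""
    return baseline_ooms + ooms_per_year * years

trend = 1.0  # assumed OOMs gained per year: the load-bearing constant
for year in range(2024, 2028):
    level = extrapolate_ooms(0.0, trend, year - 2024)
    print(f"{year}: ~10^{level:.0f} x today's effective compute")

# The whole roadmap lives or dies on `trend`, one number that quietly
# assumes no data walls, no power bottlenecks, no geopolitical shocks.
```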
Dig deeper, and you see their shared playbook for dodging doomsday. Hari Seldon hatches his Plan with Foundations as bulwarks against barbarism, shortening a 30,000-year nightmare to a mere millennium. Aschenbrenner? He’s all in on “The Project,” a U.S.-centric AI fortress to lock down super-intelligence before it slips into authoritarian hands. Both cook up elite backups too: the shadowy Second Foundation for psychic tweaks, or Aschenbrenner’s brain trusts and his own hedge fund empire, quietly betting billions on the infrastructure to make his prophecies self-fulfilling.
Ah, the allure of grand predictions. We’re all prediction engines at the end of the day, aren’t we? As I’ve been diving back into Isaac Asimov’s Foundation trilogy lately and remembering Hari Seldon, the mastermind behind it all, I appreciate how he crafts this elegant system to forecast humanity’s path across millennia. But as any fan knows, it’s got a glaring hole: the unpredictable wildcard, like that psychic mutant the Mule, who upends everything because psychohistory thrives on masses, not mavericks.
Fast forward to our own era, and here’s Leopold Aschenbrenner with his 2024 manifesto.
It’s a bold roadmap to AGI by 2027, super-intelligence exploding shortly after, all powered by scaling compute and algorithms like we’re on some unstoppable tech escalator.
Hmmmm. Sound familiar? Both Seldon and Aschenbrenner are playing the long game, betting on patterns (statistical crowds for one, Moore’s Law-ish trends for the other) to shape futures we can barely grasp. On a meta-level, their blind spots mirror each other in fascinating ways.
Seldon’s psychohistory assumes a predictable human swarm, blind to the rogue individual who bends reality with unique powers. Aschenbrenner? He’s extrapolating from GPT leaps, assuming steady OOMs of progress will bulldoze us toward god-like AI without hitting walls. Critics point out he’s underplaying alignment nightmares: those moments when super-smart systems go rogue, not unlike the Mule hacking minds.
Or geopolitical curveballs: what if China’s espionage or some international treaty derails the U.S.-led race he envisions? It’s the same oversight: overconfidence in the model, ignoring the chaos of outliers, whether mutants, messy human politics, or devastating solar flares that wipe out global electric grids.
Then there’s the societal ripple. Seldon sets up Foundations to nudge history, but even he needs a hidden Second Foundation to patch the glitches. Aschenbrenner calls for “The Project,” a nationalized AI push, to secure a liberal utopia, but he glosses over economic tsunamis, like job wipeouts or power grids buckling under trillion-dollar clusters.
Both visions are top-down, engineered salvations that might falter on the human element: unforeseen ethics, sentience debates, or just plain stagnation if the data dries up. Ephemeralization, as Bucky Fuller would say (that drive to do more with less through tech), underpins both. Seldon’s math compresses chaos into order; Aschenbrenner’s scaling laws promise infinite smarts from finite silicon. Yet the blind spot persists: what if the “more” introduces variables that shatter the frame? I’ve tinkered with NLP models in my own work, watching them surprise me with outputs that veer off-script. It’s a humbling reminder that even our best forecasts have shadows.
One lesson I have learned along the way is really quite simple. While top-down is appealing for narratives and planning, life is a bottom-up process. No matter how well we plan, the one thing we know for certain is that events will not unfold exactly as planned. By codifying the details of our roadmaps, we create fiction. That’s not to say we should not plan, only that plans are like maps, and the map is not the territory.
At its foundation, Aschenbrenner’s treatise is built on the cornerstone of Large Language Models (LLMs): their existence and their possibilities. And therein lies the source of his most obvious blind spot. All his conclusions rest on the math that follows from his basic assumptions.
Sure, LLMs stand on a bedrock of solid math. Think linear algebra juggling those massive word embeddings, calculus driving the gradient descent that tunes the beast during training, and a dash of information theory to measure how well it crunches data. It’s all deterministic at this structural level: plug in the numbers, and the computations churn out predictably, like a well-oiled machine. But don’t let that fool you; this certainty is just the scaffolding for the real show.
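Here is a toy illustration of that deterministic scaffolding, with the sizes and weights invented purely for demonstration (no real model’s architecture is implied): fix the weights, and the same token always yields the same logits.

```python
# Deterministic scaffolding in miniature: with fixed weights, the forward
# computation is pure linear algebra; same input, same output, every time.
# Shapes and weights here are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)         # fixed seed = fixed "weights"
vocab_size, d_model = 8, 4

E = rng.normal(size=(vocab_size, d_model))   # toy word-embedding matrix
W = rng.normal(size=(d_model, vocab_size))   # toy output projection

def logits_for(token_id: int) -> np.ndarray:
    """Embedding lookup, then a matrix multiply: no randomness involved."""
    return E[token_id] @ W

print(np.allclose(logits_for(3), logits_for(3)))  # True, every single run
```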
At their heart, LLMs thrive on the wild uncertainty of probabilities, modeling language as a probabilistic dance where every next word is a bet based on patterns from oceans of training data. Softmax turns raw scores into likelihoods, and sampling introduces that delightful randomness; why else would the same prompt spit out variations? It’s this embrace of ambiguity that lets them mimic human chatter, handling the messy, context-riddled nature of words far better than rigid rule-based systems ever could. Sure, it leads to hallucinations or inconsistencies, but that’s the price of flexibility in an imperfect world.
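A minimal sketch of where the dice enter, using a made-up five-word vocabulary and made-up scores: softmax maps the logits to a probability distribution, and every sampled draw is a fresh bet.

```python
# Where the uncertainty lives: softmax maps raw scores to probabilities,
# and sampling from them is why one prompt can yield many continuations.
# The vocabulary and logits are invented for illustration.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    exp = np.exp(logits - logits.max())     # shift for numerical stability
    return exp / exp.sum()

vocab = ["the", "a", "galaxy", "empire", "mule"]
logits = np.array([2.0, 1.5, 0.8, 0.7, 0.1])  # hypothetical next-token scores

probs = softmax(logits)
rng = np.random.default_rng()
for _ in range(3):
    print(rng.choice(vocab, p=probs))       # repeated runs will differ
```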
In the end, blending mathematical precision with probabilistic fuzziness makes LLMs powerful yet quirky tools. They’re not calculators delivering absolute truths; they’re more like savvy gamblers at the language casino, stacking odds from data to generate responses that feel eerily insightful. As AI evolves, tweaks like fine-tuning aim to dial down the uncertainty, but the probabilistic soul remains, much like Buckminster Fuller’s “ephemeralization,” doing more with less certainty.
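Fine-tuning is a training-time change, but the sampling temperature offers a simple mechanical picture of what “dialing down the uncertainty” means: sharpen the distribution toward its mode and the gambler bets more confidently, yet it is still a bet. Again, the logits are invented.

```python
# Temperature as a stand-in for "dialing down the uncertainty": a lower
# temperature sharpens the softmax toward the top token, but the output
# never stops being a probability distribution. Logits are invented.
import numpy as np

def softmax_t(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = np.array([2.0, 1.5, 0.8])
for t in (1.0, 0.5, 0.1):
    print(f"T={t}: {np.round(softmax_t(logits, t), 3)}")

# As T approaches 0, the mass collapses onto the argmax: certainty is
# approached, never reached.
```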
Alas, I’m no Seldon or Aschenbrenner, just a data wrangler pondering the arcs. But if history (or sci-fi) teaches anything, it’s to watch for the Mules in the machine.