Monthly Reading, May 2023
Res Extensa #35 :: against stasis, the regulation-industrial complex, "Protopias", and the ever-rising cost of nuclear power
I'm following the blitzkrieg of AI, GPTs, and LLMs as best I can these days. The simultaneous debate raging on "alignment" and "AI safety" is, to me, both important to pay attention to and wildly overblown (so far).
This piece co-authored by Byrne Hobart and Tobias Huber makes the case for "dynamism" — that we can proceed with rational caution, eyes-wide-open, into AI research without the doomerism coming from the likes of Eliezer Yudkowsky or Nick Bostrom.
The "safetyism" mentality predates its prevalence in AI. Risk tolerance everywhere is at a generational low, at least in the West:
Over the past decades, we’ve become extremely risk intolerant. It’s not just AI or genetic engineering where this risk aversion manifests. From the abandonment of nuclear energy and the bureaucratization of science to the eternal recurrence of formulaic and generic reboots, sequels, and prequels, this collective risk intolerance has infected and paralyzed society and culture at large (think Marvel Cinematic Universe or startups pitched as “X for Y” where X is something unique and impossible to replicate).
This type of "Butlerian Jihad" mentality is appealing to a certain stripe of fearmonger. But in reality there's no historical precedent that warrants such pessimism. Put me on the side of "innovation solves all problems" (even if there are speed bumps along the way).
If our "stasist" culture frustrates you as much as it does me, check out Virginia Postrel's The Future and Its Enemies.
Kevin Kelly argues that utopias and dystopias both loom large in the cultural imagination, but neither is likely to play out. Utopia is literally unattainable. Dystopia is possible, but the Mad Max or Escape from New York depictions we're familiar with aren't what would actually happen. Real dystopias do exist, but they look like the Soviet Union or Gaddafi's Libya: strangling, tyrannical bureaucracies that completely capture societal rewards.
I love his idea of "Protopia" — the realistic state we should be collectively pursuing:
I think our destination is neither utopia nor dystopia nor status quo, but protopia. Protopia is a state that is better today than yesterday, although it might be only a little better. Protopia is much much harder to visualize. Because a protopia contains as many new problems as new benefits, this complex interaction of working and broken is very hard to predict.
The emerging field of progress studies is all about this: understanding our own history of progress, how it happens, and how to make this incremental improvement continue ad infinitum.
On the subject of regulation, nuclear power has struggled since at least the 1970s under an ever-expanding regulatory regime that makes it effectively impossible to build new power plants in the US.
An extensive report from the Institute for Progress digs into the underlying causes of the skyrocketing construction costs for nuclear plants.
Nuclear plant construction is often characterized as exhibiting “negative learning.” That is, instead of getting better at building plants over time, we’re getting worse. Plants have gotten radically more expensive, even as technology has improved and we understand the underlying science better.
It's telling that so much of the published data on this topic comes from the 70s and 80s. We have a clean, emissions-free source of energy available to us, yet we've built only 3 reactors in the last 30 years, so there's little current data to reference. Even if you're a skeptic with fears about nuclear safety, remember that the only way to make it safer is to keep iterating on the problem in the first place. And when it's effectively impossible to build nuclear infrastructure, you drive away future nuclear engineers, leaving no brainpower to put toward the problem. A vicious cycle.
🔗 Quick Links
How Complex Systems Fail — A primer.
Poor Charlie's Almanack — The latest from Stripe Press.
A deep learning algorithm to predict risk of pancreatic cancer from disease trajectories — Pancreatic cancer is a scourge, one of the most lethal cancers. AI is helping us make breakthroughs on early diagnosis.
Examples of perverse incentives — Sometimes what you're actually rewarding isn't what you intended to encourage.