Nuclear Meltdown: How AI Maximalism is Destroying Companies
Did you expect anything better from the management style which brought you Chernobyl?
“Show me the incentives, I’ll show you the outcome.” – Charlie Munger
Enterprise companies the world over are shoving AI down the throats of both employees and customers.
New chatbots & 1990s-Microsoft-Clippy-style AI helpers are everywhere, and in some cases I’m sure they’re helping someone. Engineers in big tech are now being measured on PRs submitted per week and AI tokens spent.
Is anyone better off? Are we slow-rolling towards disaster?
The difference between elephants and mice
Everyone knows mice are nimble; they dart around. Elephants seem to be slow, lumbering beasts. And yet, occasionally, they can move very quickly, fast enough to trample to death dozens of people who didn’t get out of the way.
Startups are mice. Big corporations are elephants.
While engineers at startups are usually free to use whatever they want to get the job done and ship the product, engineers in big enterprise companies are often hamstrung in unique ways.
On the one hand, they are often limited in which tools and technologies they can use, whether because leadership wants to limit tech stack fragmentation, or because the IT & security departments justify their existence by blocking even the most benign tools until you file a JIRA ticket to get them on the whitelist.
Yet, like the best 5 Year Plan from China, enterprises tend to rely much more on heavy-handed, top-down mandates and policies. Unlike startups, well known for granting employees more agency out of the necessity to move fast (ship or die), enterprises have established business lines and live under the sometimes-true constraint that moving too fast could kill their golden goose.
So, while this usually plays out as being chained to JIRA and other dated tools, sometimes leadership gets anxious.
Maybe a competitor has pivoted. Maybe one of the core products has stopped growing. Maybe one of the execs just got a divorce and needs a way to squeeze out a quick win before bonus season.
Regardless of the reason, enterprises can sometimes lurch very suddenly in a new direction with a Code Red, Priority 00, top-down mandate. Recently, in many places, that mandate has been “Adopt AI, or else”.
Doesn’t work? Doesn’t matter as long as it uses AI.
AI adoption is easy when it’s useful. Convince an engineer to spend a weekend playing with Claude Code and many will come back a believer for a lot of use cases.
AI adoption in the many cases where it is not useful is not organic. If it doesn’t work, nobody will use it unless forced. And so they are forced, with misguided top-down mandates rippling across the enterprise world.
When the real constraints of current AI capabilities are ignored by leadership, many things start to happen.
First, rewarding AI usage despite it not being useful (or overselling its benefits in demos) sets a toxic precedent. Instead of rewarding outcomes, as most companies did before, tool choice becomes the top criterion for promotions or layoffs. Instead of incentivizing real productivity gains, it only encourages exponential token spend.
Execs like seeing a line go up, so token spend as a proxy for productivity becomes the new KPI, despite there being no demonstrated correlation between token spend and real productivity (effort turned into output) or customer impact.
Second, skeptical employees are rooted out, dismissed by leadership as Luddites, and increasingly ostracized and pushed out of the company. The AI sycophants and hype mongers, though, are held up as the “New Soviet Man” to emulate, as they work late into the night prompting away, testing the latest tools mentioned on X, Hacker News, or Reddit.
Are the AI agents their slaves? Or, increasingly tethered to their computers watching the tokens whiz past, are they becoming slaves to their bots?
Third, lack of AI adoption is a perfect scapegoat for management.
They tell the Board of Directors that stock price performance will turn around next quarter, now that true 10x velocity has been unlocked with AI. When even aggressive AI adoption fails to improve any roadmap timelines, managers can turn around and blame the unvaccinated, which is to say, the non-sycophants, for not AI-ing hard enough.
AI-caused SEVs are swept under the rug. Code review standards start slipping as the rush to inflate PR metrics incentivizes mutually assured destruction among teammates, stamping and landing each other’s slop and hoping not to be holding the on-call pager when it gets deployed. Endless slop, and even transparently net-negative projects, are waved through because they have the magic glitter of AI sparkling on the mounds of shit.
Management can now rest easy knowing that their fickle hearts, changing the roadmap every 7 weeks, are not the problem. Product & Design can relax: it’s not their ego-trip flashy rollouts, which impress their Dribbble followers but confuse customers, that are the issue. No, it’s those pesky engineers not adopting AI and shipping fast enough.
4.6 PRs per week? Higher! 25 per week!
And you’ll make the same number of bricks, but now without straw!
Far from freeing engineers from their burdens by making sand think through their code, AI has become a fever dream for management, one they have no intention of waking up from. The slave driver’s whip simply comes down harder with each passing week of rising AI token spend but flat-to-down real productivity.
Software engineering at many companies is going through its own Luddite revolution, not unlike the introduction of the weaving machines that made tens of thousands of textile workers obsolete in the 1800s over a roughly 20-year adoption period.
Yet, so far the outcome isn’t looking like unemployment, but rather a massive re-underwriting of the programmer/employer negotiating-power dynamic.
Long gone seem to be the days of sheer desperation, when the biggest companies in the world competed to employ programmers with fat salaries, equity packages, endless benefits, and over-the-top offices.
While AI may be producing real productivity gains in some cases, management seems to have massively priced in future gains, cutting hiring to bare-bones levels, especially among the junior ranks.
And not unlike growth stocks, pricing in future gains is all fine and dandy until it’s not. When most of your present value is from revenues to be made far into the future, a small change in those expectations can crash the stock.
How will countries survive in a future of declining birth rates?
How will companies survive in a future of declining internal expertise on their complex systems?
New Chernobyl Loading...
In many ways the Chernobyl disaster stands as a warning against the Soviet-style culture of dissent suppression and total fealty to ignorant leadership decisions.
As has now been documented extensively, the engineers and site managers closest to the nuclear power plant’s operations were consistently overruled and ignored by politburo apparatchiks, sometimes hundreds or thousands of miles away, with no training in nuclear physics or engineering.
Critical actions were delayed, held up, or countermanded, exacerbating the reactor meltdown. The disaster destroyed the habitability of the surrounding town and area. To avoid global embarrassment, the unsuspecting public was not promptly evacuated, contributing to potentially tens of thousands of additional cancer deaths from radiation exposure. And many of the disaster workers who sacrificed their lives to stop the meltdown died of radiation poisoning in the weeks and months that followed.
The modern ego-trip of management overruling engineers on the topic of AI adoption is shaping up to have many of the same fatal flaws.
Eng leadership who aren’t in the trenches actually using AI in existing 1M-LOC codebases are quick to lurch from LinkedIn posts and blog posts to conclusions, demanding they be hastily implemented across the company without a care for success metrics or adversarial debate on whether the idea even still applies. AI models, guidance, and patterns change every few weeks, and yet eng leadership is always behind the ball while needing to appear in charge to their own management.
Worse still, when they are called out on the net-negative impact their rushed rollouts are having, some pull out the health insurance company’s bag of tricks and deflect, deny, and depose anyone who gets in their way. Complaints that “the engineers need to get with the program”, “we need to mature and not get left behind”, and “the time for debate is over” are not signs of strength, but of weakness.
The adoption of new technology and techniques must be through successful persuasion of engineers, not by force. Otherwise you risk killing your golden goose, namely competent engineers with deep knowledge of operating and improving the complex systems which run your billion-dollar business.
Relying on fake metrics to justify rule by fiat fools no one with even a cursory knowledge of statistics. Relying on force and threats fools no one who knows that success metrics are critical for engineers to trust and align on a plan. Unfortunately, eng leadership often attracts those skilled at backstabbing and political maneuvering, not the technically competent or persuasive.
Eventually the A players start leaving for less toxic cultures or better opportunities. Those who remain work harder under the fear of layoffs. Management wins, at least in the short term, as headcount attrition reduces labor costs and the remaining fearful employees produce more output. But this is rarely a sustainable strategy.