Hello again, and welcome back to Fast Company’s Plugged In.
A February 9 blog post about AI, titled “Something Big Is Happening,” rocketed around the web this week in a way that reminded me of the golden age of the blogosphere. Everyone seemed to be talking about it—though as was often true back in the day, its virality was fueled by a powerful cocktail of adoration and scorn. Reactions ranged from “Send this to everyone you care about” to “I don’t buy this at all.”
The author, Matt Shumer (who shared his post on X the following day), is the CEO of a startup called OthersideAI. He explained he was addressing it to “my family, my friends, the people I care about who keep asking me ‘so what’s the deal with AI?’ and getting an answer that doesn’t do justice to what’s actually happening.”
According to Shumer, the deal with AI is that the newest models—specifically OpenAI’s GPT-5.3 Codex and Anthropic’s Claude Opus 4.6—are radical improvements on anything that came before them. And that AI is suddenly so competent at writing code that the whole business of software engineering has entered a new era. And that AI will soon be better than humans at the core work of an array of other professions: “Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service.”
By the end of the post, with a breathlessness that reminded me of the Y2K bug doomsayers of 1999, Shumer is advising readers to build up savings, minimize debt, and maybe encourage their kids to become AI wizards rather than focus on college in the expectation that it will lead to a solid career. He implies that anyone who doesn’t get ahead of AI in the next six months may be headed for irrelevance.
The piece—which Shumer told New York’s Benjamin Hart he wrote with copious assistance from AI—is not without its points. Some people who are blasé about AI at the moment will surely be taken aback by its impact on work and life in the years to come, which is why I heartily endorse Shumer’s recommendation that everyone get to know the technology better by devoting an hour a day to messing around with it. Many smart folks in Silicon Valley share Shumer’s awe at AI’s recent ginormous leap forward in coding skills, which I wrote about last week. Wondering what will happen if it’s replicated in other fields is an entirely reasonable mental exercise.
In the end, though, Shumer would have had a far better case if he’d been 70% less over the top. (I should note that the last time he was in the news, it was for claims he made about the benchmark performance of an AI model he was involved with, claims that turned out not to be true.) His post suffers from a flaw common in the conversation about AI: It’s so awestruck by the technology that it refuses to acknowledge the technology’s serious remaining limitations.
For instance, Shumer suggests that hallucination—AI stringing together sequences of words that sound factual but aren’t—is a solved problem. He writes that a couple of years ago, ChatGPT “confidently said things that were nonsense” and that “in AI time, that is ancient history.”
It’s true that the latest models don’t hallucinate with anything like the abandon of their predecessors. But they still make stuff up. And unlike earlier models, their hallucinations tend to be plausible-sounding rather than manifestly ridiculous, which makes them harder to catch. That’s a step in the wrong direction.
The same day I read Shumer’s piece, I chatted with Claude Opus 4.6 about newspaper comics—a topic I often use to assess AI since I know enough about it to judge responses on the fly—and it was terrible at matching cartoonists with the strips they actually worked on. The more we talked, the less accurate it got. At least it excelled at acknowledging its errors: When I pointed one out, it told me, “So basically I had fragments of real information scrambled together and presented with false confidence. Not great.”
After botching another of my comics-related queries, Claude said, “I’m actually getting into shaky territory here and mixing up some details,” and asked me to help steer it in the right direction. That’s an intriguing glimmer of self-awareness about its own tendency to fantasize, and progress of a sort. But until AI stops confabulating, describing it as being “smarter than most PhDs,” as Shumer does, is silly. (I continue to believe that human capability is not a great benchmark for AI, which is already better than we are at some things and may remain permanently behind in others.)
Shumer also gets ahead of himself in his assumptions about how soon AI will be able to competently replace human thought and labor. Writing about the kind of complex work tasks he recommends throwing AI’s way as an experiment, he says, “If it even kind of works today, you can be almost certain that in six months it’ll do it near perfectly.” That seems extraordinarily unlikely, given that all kinds of generative AI have been stuck in the “kind-of-works” era for years now. A decent rule of thumb: Don’t believe AI will be able to do something well until it actually does.
Ultimately, the takeaway from Shumer’s post I’ll remember most isn’t anything he wrote. In the spirit of AI experimentation, I fed his piece to ChatGPT, Gemini, and Claude with the prompt “Give me an analysis/critique of this essay. Tell me whether it’s overly cautious, not cautious enough, what your own take is on the subjects discussed, etc.” I was prepared for them all to respond with something facile and shallow, more a bullet-point summary than an actual critique. ChatGPT and Gemini lived up to those expectations.
However, Claude’s swing at the topic—which it popped out in seconds, and which you can read in its entirety here—startled me. It took issue with some of the same things about the piece that bothered me, but also raised other weaknesses I hadn’t considered. I felt smarter after reading it, which still isn’t what I expect from AI prose.
A few nuggets:
Coding is a uniquely favorable domain for AI — the outputs are verifiable, the feedback loops are tight, the training data is abundant, and the environment is fully digital. Lawyering, medicine, and management involve ambiguity, institutional context, human relationships, regulatory accountability, and adversarial dynamics that are qualitatively different. He acknowledges this briefly and then waves it away.
…
Yes, many people tried GPT-3.5 and wrote off the whole field. That’s a real problem. But plenty of serious, current users — including people paying for the best models — still encounter significant limitations: hallucination (reduced but not eliminated), brittleness on novel problems, inability to maintain coherent context over truly long projects, poor calibration about its own uncertainty, and difficulty with tasks requiring real-world grounding. Shumer treats all skepticism as outdated, which is intellectually dishonest.
…
He also doesn’t seriously engage with the possibility that the economic incentives might not play out as smoothly as “AI can do the job, therefore the job disappears.” Adoption friction is real. Liability concerns are real. Organizational inertia is real. The history of technology suggests that even transformative tools take much longer to restructure industries than the pure capability timeline would suggest. Electricity took decades to reshape manufacturing even after it was clearly superior to steam.
…
I think the underlying signal Shumer is pointing at is real. AI capabilities are improving faster than most people outside the industry appreciate, and the gap between public perception and current reality is genuinely large. He’s right that most people should be engaging with these tools more seriously than they are.
But I think the essay is doing a thing that’s very common in Silicon Valley: mistaking the potential trajectory for the inevitable trajectory, and compressing timelines based on vibes and extrapolation rather than grappling seriously with the many real-world frictions that slow adoption. The piece reads like it was written in the immediate emotional aftermath of being impressed by a new model release — and those moments tend to produce overconfidence about pace.
To recap: On the same day that I found Claude Opus 4.6 writing something about Shumer’s piece that was not only coherent but insightful, I also watched it devolve into a hallucinatory fit. That’s just how AI is these days: amazing and terrible at the same time. Somehow, that reality is tough for many observers to accept. But any analysis that ignores it risks badly misjudging what comes next.
You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on fastcompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.
More top tech stories from Fast Company
Developers are still weighing the pros and cons of AI coding agents
The tools continue to struggle when they need to account for large amounts of context in complex projects. Read More →
AI expert predicted AI would end humanity in 2027—now he’s changing his timeline
The former OpenAI employee has rescheduled the end of the world. Read More →
Discord is asking for your ID. The backlash is about more than privacy
Critics say mandatory age verification reflects a deeper shift toward routine identity checks and digital surveillance. Read More →
A Palantir cofounder is backing a group attacking Alex Bores over his work with . . . Palantir
Current and former employees tell Fast Company the ad campaign is driven by opposition to the Democratic hopeful’s support for AI regulation. Read More →
Facebook’s new profile animation feature is Boomerang for the AI era
The feature is part of a wider push toward AI content in Meta apps. Read More →
MrBeast’s business empire stretches far beyond viral YouTube videos
Banking apps, snack foods, streaming hits, and data tools are all part of Jimmy Donaldson’s growing $5 billion portfolio under Beast Industries. Read More →
