Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.
Are the biggest AI labs betting on the wrong horse?
Big AI companies are betting nearly all of their R&D and capital expenditure on the idea that pre-trained transformer models can deliver AI with human-level general intelligence. This approach relies heavily on backpropagation, the standard algorithm used to train deep neural networks.
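For readers who want the idea in code, here is a toy sketch of backpropagation: compute the loss's derivative with respect to each parameter via the chain rule, then nudge the parameters downhill. This is a hypothetical single-neuron example for illustration, nothing like a lab's actual training stack.

```python
# Toy backpropagation on a single linear neuron: y_hat = w*x + b,
# loss = (y_hat - y)^2. Gradients flow backward via the chain rule.

def train_step(w, b, x, y, lr=0.1):
    y_hat = w * x + b                 # forward pass
    dloss_dyhat = 2 * (y_hat - y)     # derivative of the squared error
    dw = dloss_dyhat * x              # chain rule: dloss/dw
    db = dloss_dyhat                  # chain rule: dloss/db
    return w - lr * dw, b - lr * db   # gradient-descent update

w, b = 0.0, 0.0
for _ in range(100):                  # repeatedly fit the example (x=1, y=3)
    w, b = train_step(w, b, x=1.0, y=3.0)
print(round(w + b, 3))                # prediction w*1 + b converges toward 3.0
```

Real models apply the same principle across billions of parameters, which is where the billion-dollar compute bills come from.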
Ben Goertzel, who popularized the term “AGI” and co-edited the 2007 book Artificial General Intelligence (to which DeepMind cofounder Shane Legg contributed), is skeptical. “The commercial AI industry is just betting everything on copying GPT [generative pre-trained transformers] in various permutations, which in my view is a waste of resources because all these LLMs are kind of doing about the same thing.”
“When something works, everyone wants to double and triple down on what worked,” he says. But concentrating resources on a single paradigm carries risk. Transformer models cost billions of dollars in compute to train and demand enormous ongoing computational resources to operate. So far, major AI labs have continued to see intelligence gains from adding more compute and training data. But those gains grow more expensive as models get larger, raising the possibility that the returns will eventually stop justifying the cost. And because the financial stakes are so high, labs have little room to invest seriously in fundamentally different approaches.
Goertzel argues that scale alone is not enough without the right underlying algorithms. In his view, a major limitation of transformer models is that they cannot continually learn from new experiences and update their internal parameters in real time the way humans do. Instead, their weights are frozen once training ends: every new interaction starts from the same fixed parameters, with no lasting trace of prior exchanges.
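The distinction Goertzel draws can be made concrete with a toy sketch (hypothetical class names, not real LLM code): a deployed model reads its parameters at inference time but never writes them, whereas a continual learner would update its state with each exchange.

```python
# Toy contrast between frozen inference and continual learning.

class FrozenModel:
    def __init__(self, params):
        self.params = dict(params)    # fixed once (pre-)training ends

    def respond(self, prompt):
        # Inference reads the parameters but never modifies them;
        # the next conversation starts from the exact same weights.
        return f"answer({prompt}) using {len(self.params)} params"

class ContinualLearner(FrozenModel):
    def respond(self, prompt):
        reply = super().respond(prompt)
        # Hypothetical continual learning: each exchange leaves a
        # lasting update, the way human memory consolidates experience.
        self.params[prompt] = reply
        return reply

m = FrozenModel({"w": 0.5})
m.respond("hello")
assert m.params == {"w": 0.5}         # unchanged: nothing was learned

c = ContinualLearner({"w": 0.5})
c.respond("hello")
assert "hello" in c.params            # state updated by the interaction
```

Architectures that make the second behavior practical at scale are exactly what the alternative research programs mentioned below are chasing.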
Researchers at Google DeepMind, Microsoft, and Ilya Sutskever’s Safe Superintelligence are exploring alternative neural network architectures that may enable continual learning, Goertzel says. “DeepMind has incredible diversity within their AI team” and possesses a “deep bench” of experience with alternate AI paradigms, he says.
The result is an AI landscape in which massive compute resources are largely devoted to refining existing methods rather than pursuing fundamentally different architectures that may be better suited to the kind of human-level generalization required for true AGI. Goertzel remains optimistic that AGI could emerge within the next few years, but he believes it will likely require moving beyond simply scaling current LLMs.
Sakana’s new agents combine the intelligence of frontier AI models
Last week, Tokyo-based startup Sakana AI announced the beta release of its flagship commercial product, Sakana Fugu. The launch follows a relatively quiet stretch for the company, which was founded in 2023 by Llion Jones, one of the eight co-authors of the original transformer paper, alongside former Google Brain researcher David Ha.
Fugu is a multi-agent orchestration system designed to coordinate multiple frontier foundation models, including those from OpenAI, Google, and Anthropic, into a single collective intelligence engine. Within the system, these models function as agents working together on complex tasks spanning coding, mathematics, and scientific reasoning.
AI systems that combine multiple models in a pipeline are nothing new, but assigning tasks to specific models or switching between them has often required manual oversight. Fugu is designed to orchestrate those models autonomously, establishing collaboration topologies and routing subtasks to the model best suited for a given problem.
Another key feature is a looping mechanism that operates while the system works through a task. If it becomes stuck or fails to identify a promising path forward, it can recognize that impasse, launch corrective workflows, and iteratively work toward a stronger solution.
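The orchestration pattern described above can be sketched in a few lines: route each subtask to whichever model is nominally best suited, and loop through a corrective retry when the result looks stuck. All model names, confidence scores, and routing rules here are invented stand-ins, not Sakana's implementation.

```python
# Hedged sketch of autonomous model routing plus an impasse-recovery loop.

ROUTING = {                            # assumed specialties per agent
    "coding": "model_a",
    "math": "model_b",
    "science": "model_c",
}

def call_model(name, task):
    # Stand-in for a real API call to a frontier model; returns a fake
    # answer and a fake self-reported confidence score.
    return {"answer": f"{name}:{task}",
            "confidence": 0.9 if name != "model_b" else 0.4}

def orchestrate(task, kind, max_loops=3):
    agent = ROUTING[kind]              # pick the best-suited model
    for _ in range(max_loops):
        result = call_model(agent, task)
        if result["confidence"] >= 0.7:
            return result["answer"]    # promising path: accept it
        # Impasse detected: launch a corrective workflow by
        # reframing the task and retrying (heavily simplified).
        task = f"retry[{task}]"
    return result["answer"]            # best effort after max_loops

print(orchestrate("fix the parser", "coding"))  # routed to model_a, accepted
```

A production system would replace the confidence check with real evaluation of intermediate results, but the routing-plus-loop skeleton is the core idea.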
By combining the strengths of diverse models, Sakana AI says Fugu outperforms comparable systems on industry benchmarks including SWE-Pro, which measures real-world software engineering performance, and GPQA-D, which evaluates graduate-level scientific reasoning.
Peter Thiel is backing an AI startup that fact-checks journalists
Influential VC Peter Thiel is backing a new startup called Objection AI, whose stated mission is to “restore confidence in the Fourth Estate.” At least, that’s how the company’s CEO framed it to TechCrunch.
Objection AI is led by lawyer-turned-entrepreneur Aron D’Souza, who helped spearhead the Thiel-backed lawsuit that ultimately bankrupted Gawker Media. That legal crusade followed a 2007 Gawker article that outed Thiel as gay. While Thiel did not sue Gawker directly at the time, he secretly financed multiple lawsuits against the publisher.
If someone believes the media has published damaging or false claims about them, they can pay Objection AI $2,000 to launch an AI-assisted investigation. The company says it deploys a team of AI models to analyze facts gathered by crowdsourced “investigators,” ultimately producing a judgment styled as an official certificate. The ruling carries no legal authority, but it can be widely circulated on social media as a reputational defense tool.
D’Souza argues that media organizations can too easily damage reputations, particularly when reporting relies on anonymous sources and later proves inaccurate. (And there is indeed some logic to that critique.) Objection offers clients a mechanism to challenge coverage and initiate a public-facing review process, potentially providing a faster response than a prolonged libel lawsuit.
But critics point out that Objection may do more to suppress truth than combat misinformation. By pressuring journalists to reveal sources or discouraging whistleblowers from coming forward, such a system could create a chilling effect on investigative reporting.
The real product here probably isn’t objective fact-checking. It’s more likely a database of journalist credibility scores that can be weaponized to discredit reporters, support litigation, intimidate sources, or give powerful figures another avenue to challenge unfavorable reporting. “Your reporter has a 62% credibility rating” could become a potent talking point in a defamation case or PR offensive. Objection AI already lists a number of active investigations on its website, and there is little transparency around whether any could evolve into litigation, potentially backed by wealthy interests operating behind the scenes.
On behalf of journalists everywhere, many thanks to Thiel and D’Souza for their tireless efforts to restore public trust in the press. Now do VCs and lawyers.
More AI coverage from Fast Company:
- Celebrities like Taylor Swift are setting the guardrails for the AI age
- Why Manus has become a crucial prize in the global AI race
- The hidden logic behind AI CEOs’ job loss warnings
- PayPal says AI shopping agents are creating an invisible storefront economy
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
