
The Coming Acceleration: An Update

When I wrote about recursive artificial intelligence last September, I described an innovation still in its early stages: powerful in concept, but not yet fully realized in practice. Six months on, AI has made meaningful strides toward true recursivity. The acceleration has, well, accelerated.

A quick refresher: recursive AI refers to systems that can improve themselves, testing their own outputs, identifying weaknesses, and generating better versions without waiting for human researchers to do the work. The concern was twofold: the systems would be enormously powerful, and they would improve so fast that communities, individuals, and workers would have little time to adapt.

Here is what has changed since then.

A brief digression into the history of theoretical mathematics helps illustrate how recursive AI is developing in real time. In 1969, Volker Strassen discovered a faster algorithm for matrix multiplication, the core computational operation underlying modern AI systems. His insight reduced the arithmetic steps required: seven scalar multiplications instead of eight when multiplying a pair of 2×2 blocks, a seemingly small gain that compounds across billions of calculations as the algorithm is applied recursively. The practical record set by that work went essentially unchallenged for 56 years.
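Strassen's trick for a single 2×2 block can be written out directly. The sketch below (illustrative plain Python) computes his seven products and checks the result against the schoolbook method, which needs eight multiplications:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's seven products.

    The schoolbook method uses eight scalar multiplications; Strassen
    trades one multiplication for extra additions, a swap that pays off
    when the entries are themselves large matrix blocks.
    """
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

def schoolbook_2x2(A, B):
    # Eight multiplications: the baseline Strassen improved on.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == schoolbook_2x2(A, B) == [[19, 22], [43, 50]]
```

Applied recursively, with each "scalar" actually a sub-block, the saved multiplication compounds at every level, which is why the trick matters at scale.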

Google DeepMind’s AlphaEvolve, announced in May 2025, finally surpassed it. Rather than relying on flashes of insight from human researchers, AlphaEvolve uses automated evaluators that generate and test thousands of candidate solutions, selecting the best and iteratively improving them. The result: a new algorithm for multiplying 4×4 complex-valued matrices using 48 scalar multiplications, one fewer than Strassen’s long-standing benchmark of 49. That single-digit improvement had eluded mathematicians for more than half a century.
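AlphaEvolve's internals are not public, but the generate-evaluate-select pattern it relies on is easy to sketch. The toy loop below is my own illustration, not DeepMind code: it proposes mutated variants of a candidate, scores each one with an automated evaluator, and keeps whichever scores best.

```python
import random

def evolve(candidate, score, mutate, generations=200, seed=0):
    """Generic evolutionary improvement loop: propose variants,
    score them automatically, and keep the best found so far."""
    rng = random.Random(seed)
    best, best_score = candidate, score(candidate)
    for _ in range(generations):
        variant = mutate(best, rng)
        s = score(variant)
        if s < best_score:  # lower score = better, e.g. fewer operations
            best, best_score = variant, s
    return best, best_score

# Stand-in problem: drive a vector toward the origin. In AlphaEvolve the
# candidate is a program and the score comes from running it on benchmarks.
start = [10.0, -7.0, 3.0]
score = lambda v: sum(x * x for x in v)
mutate = lambda v, rng: [x + rng.uniform(-0.5, 0.5) for x in v]
best, best_score = evolve(start, score, mutate, generations=2000)
assert best_score < score(start)
```

The key design choice is that no human inspects the intermediate candidates: the evaluator alone decides what survives, which is what lets the loop run thousands of iterations unattended.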

The mechanism AlphaEvolve uses is a more powerful version of something now available to anyone working with AI-assisted code. I recently started using a similar approach in my own workflow automation, and the only word for it is “magical.” You provide the AI with existing code and instruct it to fix and verify the result. The system iterates until it has a working product. This is AlphaEvolve in miniature.
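That fix-and-verify loop can be sketched schematically. In the sketch below, a hard-coded list of candidate patches stands in for successive model proposals; a real workflow would call an AI coding assistant and feed each failure back into the next request.

```python
def iterate_until_passing(candidates, verify):
    """Try successive candidate implementations until one passes
    verification: the core loop of AI-assisted 'fix and verify'."""
    for source in candidates:
        namespace = {}
        try:
            exec(source, namespace)   # "apply the proposed fix"
            if verify(namespace):     # "run the tests"
                return source
        except Exception:
            continue                  # a failure prompts the next attempt
    return None

# Hypothetical stand-ins for a model's successive proposals.
candidates = [
    "def mean(xs): return sum(xs) / len(x)",    # bug: wrong variable name
    "def mean(xs): return sum(xs) // len(xs)",  # bug: integer division
    "def mean(xs): return sum(xs) / len(xs)",   # correct
]
verify = lambda ns: "mean" in ns and ns["mean"]([1, 2, 4]) == 7 / 3
winner = iterate_until_passing(candidates, verify)
assert winner == candidates[2]
```

The loop terminates only when the verifier is satisfied, which is why the quality of the tests, not the quality of any single proposal, determines the quality of the final product.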

So what happens when the frontier AI labs start applying these self-guided recursive systems to their own research and engineering work? METR, which monitors progress in advanced AI models, has been tracking the answer. Its original March 2025 study found that the duration of tasks AI agents can complete autonomously had roughly doubled every seven months over the prior six years.

More recent data suggests that the pace has accelerated: in 2024 and 2025, the doubling time shortened to approximately four months. By late 2025, the most capable models were reliably completing tasks complex enough to require five hours of skilled professional work. This isn’t a speed question but a measure of the difficulty of challenges AI is able to take on. As a consequence, the time between major improvements in model performance is shrinking.
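The compounding implied by those figures is easy to check. Taking the article's numbers, a five-hour task horizon and a four-month doubling time, a quick extrapolation (illustrative arithmetic, not a forecast) looks like this:

```python
def task_horizon(months, start_hours=5.0, doubling_months=4.0):
    """Project the autonomous-task horizon under a fixed doubling time."""
    return start_hours * 2 ** (months / doubling_months)

# Five-hour tasks today; with a four-month doubling time the horizon
# reaches 40 hours, a full work week, within twelve months.
for m in (0, 4, 8, 12):
    print(f"month {m:2d}: {task_horizon(m):5.1f} hours")
assert task_horizon(12) == 40.0
```

Three doublings in a year turns an afternoon's worth of work into a week's worth, which is the sense in which the difficulty of delegable tasks, not raw speed, is the quantity that is compounding.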

Overall, the average interval between major releases by the frontier labs has fallen from roughly 21 months three years ago to six to 12 months today, not counting the interim improvements deployed between major releases. If that pace continues, within a single year AI advancement could be five to ten times faster than anything we have seen before. Now imagine what might be possible when human and AI researchers combine to create coding labs operating continuously and at speed.

The next step in this process is to extend ever-accelerating AI agents into businesses and other organizations everywhere. There may be a limit to this recursive improvement and its implications, but that horizon isn’t yet visible. It may not exist.

The post The Coming Acceleration: An Update appeared first on American Enterprise Institute – AEI.
