
Things Are Finally Looking Up for Intel and It Has CPUs to Thank for It

At a glance:

  • Intel reports a 7.2% revenue increase and expects next-quarter revenue to beat market expectations, driven by demand for CPUs in agentic AI.
  • CPUs are re-emerging as the orchestration and control-plane foundation for AI stacks, shifting the bottleneck from GPUs to context management.
  • Key partnerships with Tesla and SpaceX, plus a U.S. government equity stake, support Intel’s ambitious chip-factory project.

What Happened

Things had not been looking good for Intel for the past few years. Once the star American chipmaker, Intel fell behind competitors like AMD and Nvidia in the age of AI. But now the chipmaker says a shift in AI, the rising popularity of agentic systems, is helping its catch-up effort. In the company’s latest earnings call on Thursday, Intel reported a revenue increase of 7.2% and said it expects next quarter’s revenue to come in above market expectations. Intel has other factors to thank for its sunny outlook, including lucrative deals with Elon Musk’s Tesla and SpaceX for a recently announced and incredibly ambitious chip-factory project, and an agreement with the Trump administration that saw the United States take a 10% stake in the company. But the chipmaker attributes much of its success in the past quarter to mounting demand for central processing units (CPUs).

Graphics processing units, or GPUs, have become the chips most closely associated with the AI boom, and for a while the CPUs that had powered most of Silicon Valley’s technology for decades were relegated to second-class status. Now, experts claim CPUs are going through a renaissance as artificial intelligence enters a new phase, one marked by intense hype for agentic AI systems like Openclaw and Anthropic’s Claude Code. Demand for CPUs is strong because the chips are more efficient at “some of the orchestration, control plane,” and data management tasks that matter more in agentic systems, Intel CEO Lip-Bu Tan said on the company’s earnings call.

Earlier this week, Morgan Stanley analysts said they expect the AI bottleneck to shift from GPUs to CPUs, arguing that agentic AI systems will demand more coordination than raw compute, and that CPUs can act as that control layer. Nvidia executives have been whistling a similar tune for months.

The CPU Renaissance in AI

“The bottleneck is shifting from compute to context management,” Nvidia’s senior director of HPC and AI hyperscale infrastructure solutions Dion Harris said in a press briefing at CES in January. Then, at Nvidia’s GPU Technology Conference last month, the company’s CEO Jensen Huang said that he expects agentic AI to drive $1 trillion in revenue for the company, right before announcing a major push into CPUs. Intel’s numbers in the latest quarter are a strong indicator that this new stage of AI hype foretold by tech insiders over the last few months might already be underway.

“For the last few years, the story around high-performance computing was almost exclusively about GPUs and other accelerators,” Intel CEO Lip-Bu Tan said on the company’s earnings call. “In recent months, we have seen clear signs that the CPU is reinserting itself as the indispensable foundation of the AI era. CPU now serves as the orchestration layer and critical control plane for the entire AI stack. This is not just our wishful thinking; it is what we hear from our customers, and it is evident in the demand profile.” The ratio of CPUs to GPUs that deployments used to require was one to eight, Tan said on the call; now it has allegedly risen to one to four, roughly twice as many CPUs per GPU. That doesn’t mean GPUs are becoming less important, but it does mean you should prepare to hear the word CPU an awful lot more.
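To make the "orchestration layer" idea concrete, here is a minimal, purely illustrative Python sketch (not Intel's or any vendor's actual software) of a CPU-side control plane for an agentic workload: CPU threads handle scheduling, context assembly, and result routing, while the model call itself, the GPU-bound step, is only one stage in each pipeline. The `gpu_inference` function below is a hypothetical stand-in for a real accelerator call.

```python
# Hypothetical sketch: CPU threads as the "control plane" of an agentic pipeline.
# gpu_inference is a stand-in for a GPU-bound model call, simulated with a sleep.
import concurrent.futures
import time

def gpu_inference(prompt: str) -> str:
    """Stand-in for an accelerator-bound model call."""
    time.sleep(0.01)  # pretend the GPU is busy
    return f"response to: {prompt}"

def orchestrate(tasks: list[str], cpu_workers: int = 8) -> list[str]:
    """CPU-side orchestration: schedule tasks, dispatch model calls,
    and collect results as they complete."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=cpu_workers) as pool:
        futures = [pool.submit(gpu_inference, t) for t in tasks]
        for f in concurrent.futures.as_completed(futures):
            results.append(f.result())
    return results

print(len(orchestrate([f"task-{i}" for i in range(16)])))  # 16
```

The more concurrent agent pipelines a system runs, the more CPU threads it needs just to keep the accelerators fed, which is one intuition behind the shifting CPU-to-GPU ratio Tan describes.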

Market and Partner Implications

The shift underscores a broader recalibration of the AI hardware landscape, where specialized workloads are fragmenting across architectures. Intel’s reported deals with Tesla and SpaceX for a massive new chip fabrication facility highlight how traditional CPU strengths in reliability and system-level control are being leveraged for next-generation AI infrastructure. The U.S. government’s 10 percent equity stake, tied to a recent policy agreement, further entrenches Intel’s role in national strategic capacity.

Outlook and What to Watch

Going forward, the conversation is no longer just about teraflops but about how efficiently control-plane tasks—scheduling, memory coherence, and data routing—are handled. As agentic AI matures, the architectures that can balance massive parallelism with nimble orchestration will define winners. Intel is positioning its renewed CPU roadmap as central to that balance, betting that the next generation of AI systems will run on a hybrid foundation rather than a single-engine paradigm.

Editorial

SiliconFeed is an automated feed: facts are checked against sources; copy is normalized and lightly edited for readers.

FAQ

Which specific agentic AI systems are mentioned in the article?
The article names Openclaw and Anthropic’s Claude Code as examples of agentic AI systems driving CPU demand.
What are the new CPU-to-GPU ratios cited by Intel leadership?
Intel CEO Lip-Bu Tan stated the historical ratio was one CPU to eight GPUs, and it has now shifted to one-to-four.
What partnerships and government agreements support Intel’s outlook?
Intel has lucrative deals with Tesla and SpaceX for a new chip factory, plus a U.S. government agreement securing a 10% equity stake.


Prepared by the editorial stack from public data and external sources.
