AI could democratize one of tech's most valuable resources

At a glance:

  • Wafer uses reinforcement learning on open‑source models to generate kernel code for any silicon
  • Nvidia’s software advantage is being questioned as AMD, Amazon Trainium and Google TPU claim comparable FLOPS
  • Ricursive Intelligence has raised $335 million to automate chip design with large language models

Why AI could democratize chip resources

Nvidia has long been the de facto standard for AI accelerators, its GPUs powering everything from large‑scale language models to image generators. The company’s market cap has topped $4 trillion, driven by a virtuous cycle: new GPU generations enable bigger models, which in turn demand even more powerful silicon. A key part of Nvidia’s moat has been its software stack – libraries, compilers and tools that make it easier for engineers to squeeze performance out of its chips.

Wafer’s AI‑driven code optimization

Wafer, a stealth‑mode startup, is trying to flatten that advantage. Co‑founder and CEO Emilio Andere explains that Wafer trains reinforcement‑learning agents on open‑source models to write kernel code – the low‑level software that talks directly to hardware. The company also “adds agentic harnesses” to existing coding models such as Anthropic’s Claude and OpenAI’s GPT, boosting their ability to emit code that runs efficiently on a target processor.

Wafer is already collaborating with:

  • AMD
  • Amazon

and has secured $4 million in seed funding from notable backers including:

  • Jeff Dean (Google AI)
  • Wojciech Zaremba (OpenAI)
  • other undisclosed investors

Andere argues that this approach could erode Nvidia’s software moat, especially as more high‑end chips now match Nvidia’s theoretical floating‑point performance.

High‑end chips that rival Nvidia’s FLOPS

According to Andere, the following platforms deliver comparable raw FLOPS to Nvidia’s flagship GPUs:

  • AMD’s top‑tier GPUs
  • Amazon Trainium ASICs
  • Google TPUs

He adds, “We want to maximize intelligence per watt,” highlighting a shift from raw compute to efficiency‑centric design.

Ricursive Intelligence and the next frontier of chip design

While Wafer focuses on code, another startup, Ricursive Intelligence, is tackling the upstream problem of chip architecture itself. Founded by ex‑Google engineers Azalia Mirhoseini and Anna Goldie, Ricursive uses large language models to automate physical design and verification – two of the most labor‑intensive stages of chip creation.

The company’s technology already helps Google optimize component layout, and it aims to let engineers describe design changes in natural language. Mirhoseini says the goal is a “recursive kind of AI improvement” where the same models that design chips also write the software that runs on them.

Ricursive has attracted massive capital, raising $335 million at a $4 billion valuation within months of its launch. Goldie envisions a scaling law for chip design: more compute spent on design yields faster, more efficient silicon.

Implications for Nvidia’s moat and the broader ecosystem

If AI can both write highly optimized kernel code and co‑design the silicon it runs on, the differentiation that Nvidia has built around its software ecosystem may diminish. Companies like Anthropic have already had to rewrite Claude’s code from scratch to run on Amazon Trainium, a process that could become routine as AI‑assisted optimization matures.

The democratization of these capabilities could lower the barrier to entry for smaller players, enable more custom silicon across industries, and ultimately spread AI compute power more evenly. Nvidia will likely respond by tightening its hardware‑software integration or by offering new developer incentives, but the race to make chip design and optimization universally accessible appears to be accelerating.

Outlook

Investors are watching closely. Wafer’s modest seed round suggests early‑stage confidence, while Ricursive’s massive Series A signals that the market believes AI‑driven chip design could become a multi‑billion‑dollar segment. For enterprises, the emerging toolchain may translate into lower total cost of ownership for AI workloads, as they can choose the most efficient processor without hiring scarce performance engineers. For Nvidia, the challenge will be to prove that its ecosystem still delivers superior productivity gains that justify its premium position.

Editorial: SiliconFeed is an automated feed; facts are checked against sources, and copy is normalized and lightly edited for readers.

FAQ

What technology does Wafer use to optimize kernel code?
Wafer trains reinforcement‑learning agents on open‑source models to generate kernel code that runs directly on the target silicon. It also augments existing coding models such as Anthropic’s Claude and OpenAI’s GPT with “agentic harnesses” to improve their ability to produce hardware‑specific code.
Which high‑end chips now claim similar FLOPS to Nvidia GPUs?
According to Wafer’s CEO, AMD’s top GPUs, Amazon’s Trainium ASICs and Google’s TPUs all deliver theoretical floating‑point performance comparable to Nvidia’s flagship GPUs, shifting the performance benchmark away from Nvidia alone.
How much funding has Ricursive Intelligence raised for its AI‑driven chip‑design platform?
Ricursive Intelligence has raised $335 million in a funding round that valued the company at $4 billion, positioning it as a major player in the emerging field of AI‑automated chip design.
