A memristor that works at 700°C: TetraMem's AI chips could compute where GPUs cannot
At a glance:
- A memristor built at the University of Southern California operates reliably at 700 degrees Celsius — hotter than molten lava and more than 200 degrees beyond Venus surface temperatures — and held data for over 50 hours without refresh.
- TetraMem, the startup commercializing the technology, has moved its room-temperature AI inference chips to 300mm production wafers through a partnership with SK hynix and support from the CHIPS and Science Act.
- The device uses a single-atom-thick graphene layer to block tungsten atom migration inside a hafnium oxide memristor, and its in-memory architecture performs AI matrix multiplication physically rather than sequentially in silicon logic.
The 700-degree memristor
Every probe humanity has sent to Venus has eventually died. The Soviet Venera landers survived from 23 minutes to just over two hours on a surface where temperatures exceed 460 degrees Celsius — heat sufficient to melt lead. Their electronics, purpose-built to endure that environment, still failed. The longest-lived mission in the history of Venus exploration lasted 127 minutes, after which the chips stopped working and the data stopped flowing.
A team at the University of Southern California has now built a memory chip that operates reliably at 700 degrees Celsius — hotter than molten lava and more than 200 degrees beyond anything Venus could produce. The device, published in Science on 26 March 2026, held data for more than 50 hours at that temperature without any refresh cycle, survived more than one billion switching cycles, and ran on just 1.5 volts with switching speeds measured in tens of nanoseconds. Crucially, 700 degrees was not the device's operational limit; it was the limit of the testing equipment available to the team.
How the graphene barrier works
The device is a memristor — a nanoscale component that stores information and performs computation simultaneously. Joshua Yang's team at USC built it from three layers: tungsten on top, hafnium oxide ceramic in the middle, and a single-atom-thick sheet of graphene on the bottom. Each material was chosen for a specific reason. Tungsten has the highest melting point of any metal. Hafnium oxide is already a standard insulator in semiconductor fabrication. Graphene, like diamond, withstands enormous heat without degrading.
In conventional memory devices, heat causes metal atoms from the top electrode to migrate through the ceramic layer until they reach the bottom electrode, creating a permanent short circuit that kills the device. Graphene prevents this. Its surface chemistry with tungsten is, as Yang described it, almost like oil and water — the tungsten atoms find nothing to anchor to and migrate away. No anchor means no short circuit, and no short circuit means no failure.
The team did not merely observe the effect. Using electron microscopy, spectroscopy, and quantum-level computer simulations, they mapped the atomic interface between graphene and tungsten to understand exactly why it works. That mechanistic understanding opens the door to identifying other materials with similar surface chemistry, potentially making the device easier to manufacture at industrial scale. Two of the three materials — tungsten and hafnium oxide — are already standard in semiconductor foundries worldwide. Graphene is on the development roadmaps of both TSMC and Samsung.
In-memory computing and the AI advantage
The extreme-temperature result is the headline, but the commercial significance of the memristor lies elsewhere. More than 92 percent of the computing in AI systems is matrix multiplication — the core mathematical operation behind everything from image recognition to large language models. Today's digital processors perform it sequentially, step by step, consuming enormous amounts of energy in the process. A memristor array performs it physically. When a voltage is applied across a device, Ohm's Law — current equals voltage multiplied by conductance — carries out each multiplication, and Kirchhoff's current law sums the resulting currents along each output wire, completing the dot product. The computation happens in the instant the electricity passes through. No clock cycles, no memory bus, no energy wasted shuttling data between processor and storage.
This is in-memory computing: the data stays where the computation happens, eliminating the von Neumann bottleneck that constrains every conventional processor architecture. The result is inference that is orders of magnitude faster and more energy-efficient than GPU-based systems performing the same calculations.
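The analog operation described above can be emulated in a few lines of code. This is only an illustrative sketch — the conductance and voltage values are hypothetical, and a real crossbar does the summation in wires rather than in software:

```python
# Sketch of how a memristor crossbar computes a matrix-vector product.
# Illustrative model only: conductances (siemens) and voltages (volts)
# are made-up values, not measurements from any real device.

def crossbar_matvec(conductances, voltages):
    """Each column's output current is the sum of the Ohm's-law products
    I = V * G down that column. In hardware, Kirchhoff's current law
    performs the summation physically; here we emulate it arithmetically."""
    n_rows = len(voltages)
    n_cols = len(conductances[0])
    return [
        sum(voltages[r] * conductances[r][c] for r in range(n_rows))
        for c in range(n_cols)
    ]

# A 3x2 conductance matrix encoding model weights,
# driven by a 3-element input voltage vector:
G = [[0.5, 1.0],
     [2.0, 0.1],
     [1.5, 0.4]]
V = [1.0, 0.5, 2.0]

print(crossbar_matvec(G, V))  # one output current per column, each a dot product
```

In a physical crossbar every multiply-accumulate happens at once, in the time it takes current to flow, which is why the architecture avoids both the clock cycles and the data movement of a digital processor.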
The International Energy Agency projects that energy use from data centres will double by 2026, driven overwhelmingly by the computational demands of AI training and inference. The AI industry's prevailing answer has been to build larger data centres, secure more power, and negotiate nuclear energy contracts. A memristor-based architecture attacks the problem at a fundamentally different level — not by supplying more energy to the same kind of chip, but by building a chip that needs orders of magnitude less energy to perform the same computation.
AI demand has also driven a 90 percent surge in memory prices and a global DRAM shortage, forcing manufacturers to redirect capacity toward high-bandwidth memory for AI accelerators. The memristor represents a fundamentally different approach. Instead of separating memory from processing and shuttling data between them at enormous energy cost, it combines them. The architecture does not compete with DRAM for capacity — it competes with GPUs for the AI inference workload itself.
From lab to fab: TetraMem's production roadmap
Yang co-founded TetraMem with three co-authors of the original memristor research: Qiangfei Xia, Miao Hu, and Ning Ge. The company, headquartered in San Jose, has built working in-memory computing chips that students in Yang's lab use daily to run machine learning tasks.
TetraMem's key partnerships include:
- SK hynix — the world's second-largest memory manufacturer — on a joint research project to advance in-memory computing for AI.
- Andes Technology — to integrate the memristor architecture with a RISC-V vector processor.
- NY CREATES at the Albany NanoTech Complex — where the company successfully upscaled its technology from 200mm to 300mm wafers, the industry-standard platform for mass manufacturing.
The NY CREATES partnership is particularly significant. It was supported under the CHIPS and Science Act's goal of strengthening the domestic semiconductor ecosystem, and demonstrated what NY CREATES calls a split-fab model: companies develop and test chips at Albany before transferring the processes to a foundry partner for mass production. TetraMem's memristors are no longer a laboratory curiosity — they are on 300mm wafers.
The US government's CHIPS Act investments have so far reshaped the domestic semiconductor landscape primarily through billions flowing into logic chip fabrication. TetraMem's path through NY CREATES shows that the Act's ambitions extend beyond logic: the infrastructure built to reshore chip manufacturing also enables fundamentally new computing architectures to reach production scale.
Market momentum and the competitive field
The global memristor market was valued at 420 million dollars in 2025 and is projected to reach 4.5 billion dollars by 2030 and 21.7 billion dollars by 2035, growing at a compound annual rate of more than 48 percent. The broader analog AI chip market is expected to grow from 251 million dollars in 2025 to 2.5 billion dollars by 2035. These numbers are small relative to the roughly 600 billion dollars that Nvidia alone has generated in market capitalisation from AI chip demand, but they represent the earliest phase of an architectural transition.
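The growth figures above are internally consistent, which is worth a quick sanity check — growing from 420 million dollars in 2025 to 21.7 billion dollars in 2035 does imply a compound annual growth rate in the neighbourhood of 48 percent:

```python
# Sanity check of the compound annual growth rate (CAGR) implied by the
# projections quoted above: $420M (2025) to $21.7B (2035) over 10 years.

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

rate = cagr(420e6, 21.7e9, 10)
print(f"{rate:.1%}")  # roughly 48%, matching the "more than 48 percent" figure
```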
Key competitors and industry players in the memristor and analog compute space include:
- Mythic AI — developing analog in-memory compute for edge AI
- Rain Neuromorphics — building brain-inspired analog AI hardware
- TSMC — researching memristor crossbar arrays, with a mixed-precision processor achieving 91.2 percent array yield and 85 percent accuracy on standard image classification benchmarks
- Samsung — also exploring memristor architectures and graphene integration roadmaps
- KAIST — a South Korean research university building memristor crossbar arrays for edge inference
Asia-Pacific handset manufacturers have committed to embedding analog compute chips in 2026 flagship devices. The technology is moving from papers to products.
Computing where nothing has computed before
The high-temperature version of the memristor opens a category of computing that does not currently exist: on-site AI inference in environments where conventional electronics cannot survive. A Venus lander equipped with memristor-based processors could analyse atmospheric samples, classify geological formations, and make autonomous decisions without transmitting raw data to Earth and waiting for instructions. A geothermal drilling system could process sensor data at depths where the surrounding rock glows red. A nuclear reactor could run diagnostic AI inside its containment vessel.
Some researchers have proposed placing data centres in space to address AI's energy demands, leveraging the vacuum of orbit for cooling and solar energy for power. The memristor inverts that problem entirely. Instead of taking data centres to space, it takes the computation to the environment where the data originates — whether that environment is the surface of Venus, the interior of a jet engine, or the core of a fusion reactor.
NASA's High Performance Spaceflight Computing processor, built by Microchip Technology, delivers 500 times the performance of current radiation-hardened space chips. But it was designed for the cold vacuum of interplanetary transit, not the furnace of a planetary surface. The memristor survives both extremes. A device rated for 700 degrees has enormous thermal headroom at the 125-degree peaks that automotive computers routinely face, in the radiation-heavy environment of deep space, or under the thermal cycling of low-Earth orbit.
Europe's semiconductor sector has called for an immediate Chips Act 2.0 to fund next-generation manufacturing capabilities beyond conventional logic and memory. Memristor-based in-memory computing is exactly the kind of architecture such investment would support: a European-fabricable technology that does not depend on access to Nvidia's GPU supply chain or TSMC's most advanced logic nodes.
What still needs to happen
Yang has been careful not to oversell the timeline. Memory alone does not make a complete computer. High-temperature logic circuits must be developed and integrated alongside memristor memory to build a full system. The current devices were built by hand at sub-microscale in a laboratory. The missing component — reliable, extreme-temperature non-volatile memory — has now been demonstrated. But the path from a 700-degree proof of concept to a finished product that can be deployed on a Venus lander or inside a reactor core involves years of integration work, foundry qualification, and system-level testing.
The chip that survived temperatures hotter than lava was, as the researchers acknowledge, partly an accident of materials science. The company that will eventually sell it, however, was built with that outcome in mind.