Why I ditched Claude Code and Codex for Zed’s local-first AI editor

At a glance:

  • Zed pairs a Rust-native, frame-based code editor with an AI chat that can draw from local LLMs, Claude Code, Codex, and other providers to cut costs and protect privacy.
  • The tool uses CRDTs to weave AI-generated code into files as you type without overwriting human-written blocks, and it parallelizes work across CPU cores while running syntax highlighting in the background.
  • Rather than full agentic orchestration, Zed favors a hands-on workflow where AI suggests, explains, and optimizes visible code, helping users learn syntax and avoid misconfigurations.

A deliberate step back from agentic coding

Vibe coding and agentic development are here to stay, whether you reach them via Claude Code, Codex, or harnesses like Pi. The catch is that, unless you host your own local LLMs, the accelerated cycle can rack up recurring costs quickly, and sometimes you simply do not need that much automation. There is something to be said for a more traditional coding environment where AI fixes structure and expands function calls intelligently rather than steering the session from a distance. Typing code yourself, block by block, tends to cement learning better than delegating the reasoning to a "personal clanker" and then reverse-engineering the result.

Zed sits closer to the hands-on end of the spectrum without abandoning modern conveniences. Its AI chat can query a multitude of providers, including locally hosted LLMs, so you can call on models for clarifications, examples, and optimizations while you write. That balance makes it well suited for learning: you produce code on one side, probe and refine it in the chat on the other, and avoid the opacity of pseudo-code conversations abstracted away in agentic harnesses. The editor also taps MCP servers to pull in on-demand knowledge and leans on language servers to keep pace with syntax changes, all layered under a robust theming engine.
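As an illustration, pointing Zed at a locally hosted model is a settings.json change. The keys below (`language_models`, `agent.default_model`) reflect recent Zed releases and may differ in yours, and `qwen2.5-coder` is just a placeholder model name; check Zed's configuration docs for the authoritative schema:

```json
{
  "language_models": {
    "ollama": {
      // Default Ollama endpoint; adjust if your server listens elsewhere.
      "api_url": "http://localhost:11434"
    }
  },
  "agent": {
    "default_model": {
      "provider": "ollama",
      // Hypothetical local model name; use whatever you have pulled.
      "model": "qwen2.5-coder"
    }
  }
}
```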

Performance by design, not by wrapper

Many desktop apps lean on Electron wrappers that deliver cross-platform convenience at the cost of sluggishness. Zed is written in Rust from the ground up, which alone makes it snappier, but the team also rethought how IDEs render work. Instead of treating the coding environment like a web page, Zed treats it like frames in a video game: the window is redrawn on the GPU each frame, work is aggressively parallelized across CPU cores, heavy tasks pull in every available resource, and background jobs handle syntax highlighting without blocking the foreground. The macro view is similar in spirit to a game engine's render loop, but the outcome feels markedly different in daily use.

Beyond raw speed, Zed uses AI to predict the contents of your next code block as you type, moving past simple tab-autocomplete of known strings. It can insert AI-generated suggestions at whatever pace you code, relying on CRDTs to reconcile changes without overwriting human-authored blocks (or vice versa). The result feels like a best-of-both-worlds compromise: hands-on experience that consolidates learning, plus AI assistance that enhances rather than replaces your workflow.

Learning in the open, not in the shadows

Zed’s approach to AI differs from agent-centric tools such as Antigravity. It does not push full delegation; instead, it lets LLMs ideate, create, and fix code from within the visible file. That visibility matters while learning Jinja, YAML, and, reluctantly, Python. One-shot solutions from Claude Code may work, but they do not necessarily build reasoning ability. Multi-day agentic coddling can yield useful foundations, yet even then, seeing the syntax and getting a nudge when you make a typo establishes more durable underpinnings. Learning to make better tools includes learning to secure them against leaking credentials, ports, and other misconfigurations, an area where the broader software market should pay closer attention.

Access without lock-in

Even while testing Zed’s subscription plan, you can retain existing subscriptions to Claude Max and other providers. Zed supports connectors to Codex, Claude Code, and local LLM endpoints, so a multitude of downloaded models remain at your disposal. The net effect is that you do not lose access to anything by using Zed; in some ways, you gain flexibility by using those models as companions next to your code blocks rather than as opaque engines that return finished artifacts you must then audit for correctness, functionality, and security.

Choosing guidance over delegation

The core tension with agentic coding harnesses is opacity: they can do the work for you without prompting you to pay attention. If you are not a seasoned coder and want to learn, having something look over your shoulder and suggest options or corrections is more valuable than automation that finishes the job. Zed favors the former—showing the ebb and flow of code, letting AI help you code instead of doing the whole thing for you—while still keeping modern conveniences close at hand when your brain needs a helping hand.

Editorial

SiliconFeed is an automated feed: facts are checked against sources; copy is normalized and lightly edited for readers.

FAQ

Which AI models and services does Zed support?
Zed can draw from many providers, including locally hosted LLMs, and offers connectors to Claude Code, Codex, and Claude Max subscriptions. Users can tap their existing provider subscriptions alongside local endpoints, keeping their downloaded models at their disposal without losing prior investments.
How does Zed handle AI-generated code to avoid overwriting my work?
Zed uses CRDTs (Conflict-free Replicated Data Types) to insert AI-created code as you type, reconciling changes so that AI-generated blocks do not overwrite human-coded blocks (or vice versa). This allows the editor to weave suggestions into files at your own pace while preserving your authorship and intent.
What makes Zed faster than other code editors?
Zed is written in Rust from the ground up, avoiding Electron wrappers and their associated sluggishness. It parallelizes workflows across CPU cores, runs syntax highlighting in the background, and treats the coding environment like frames in a video game rather than a web page, yielding snappier day-to-day performance.

Prepared by the editorial stack from public data and external sources.