Claude does more for my workflow than all other AI tools combined: these 3 features are why
At a glance:
- Interactive inline visuals render directly inside the chat, built with HTML, CSS, JavaScript, and SVG.
- Projects lets you upload context once and have it automatically available in every conversation.
- Paid Opus 4.6 or Sonnet 4.6 plans raise the context window to 1 million tokens, five times the 200 k baseline.
First impressions of Claude
Nolen, a veteran writer for MakeUseOf, has been testing AI tools since 2019. When they first opened Claude, the expectation was simple: ask a question, get an answer, and move on. That narrow view quickly disappeared as they discovered features that no other chatbot seemed to offer. The author stresses that the comparison isn’t about Claude being universally better than ChatGPT, Gemini, or other rivals; it’s about three specific capabilities that streamline repetitive, knowledge‑intensive work.
The review focuses on the core Claude chatbot accessed via desktop or Claude.ai web, not the separate Claude Code product. Throughout the piece, Nolen highlights how Claude’s design choices reduce context‑switching, keep the workflow inside a single window, and let users treat the model as a persistent collaborator rather than a one‑off query engine.
Interactive inline visuals replace separate tools
Anthropic introduced a new visual layer in March 2026 that differs from the older “Artifacts” side‑panel. Instead of opening a separate pane, Claude can now generate interactive visuals inline, right between paragraphs of its response. These visuals are built with standard web technologies—HTML, CSS, JavaScript, and SVG—so they behave like miniature web apps that you can click, drag, or edit on the fly.
The distinction matters: Artifacts are permanent, shareable outputs that live in a side panel, while inline visuals are fleeting “whiteboard sketches” that appear only for the moment they’re needed. Users can still export them as SVG or HTML files, or promote them to full‑blown Artifacts if they want to keep a copy. Nolen notes that they even built a functional calculator on the fly to run cost‑comparison scenarios without leaving the chat. The result is a workflow where opening a spreadsheet, launching a design tool, or sketching on paper is no longer necessary—Claude does it all within the conversation.
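To make the calculator anecdote concrete, here is a sketch of the kind of logic such an inline widget might wire to its buttons. Everything in it, the function names, the plan shapes, and the dollar figures, is an illustrative assumption, not code from the article or from Claude itself:

```javascript
// Illustrative cost-comparison helper, the sort of logic an inline,
// Claude-generated calculator widget might attach to its inputs.
// All names and figures here are hypothetical.
function monthlyCost(plan) {
  // plan: { base: flat monthly fee, perSeat: cost per user, seats: user count }
  return plan.base + plan.perSeat * plan.seats;
}

function cheaperPlan(a, b) {
  // Return whichever plan costs less per month (ties go to the first).
  return monthlyCost(a) <= monthlyCost(b) ? a : b;
}

const teamPlan = { name: "Team", base: 0, perSeat: 25, seats: 6 };   // 150/mo
const flatPlan = { name: "Flat", base: 120, perSeat: 0, seats: 6 };  // 120/mo

console.log(cheaperPlan(teamPlan, flatPlan).name); // "Flat"
```

In the scenario Nolen describes, Claude would generate both this logic and the clickable HTML around it in one response, so the comparison runs without ever opening a spreadsheet.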
Projects keep context alive across sessions
Many users treat Claude like a search engine: start a new chat, explain the problem, get an answer, and then close the window. That approach wastes time when the same project requires the same background material day after day. The Projects feature solves this by letting you create a container, upload any relevant documents—style guides, code docs, research PDFs, brand manuals—and have that context automatically loaded into every new chat launched inside the project.
For developers, this means loading a codebase’s documentation once and referencing it in every debugging session. Researchers can keep source papers and notes at hand, and marketers can have brand guidelines ready for every copy‑writing request. Paid users get unlimited Projects, while free accounts can create up to five, which still provides significant value. The author emphasizes that the time saved by not re‑typing or re‑uploading context adds up to several hours per month for heavy users.
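Conceptually, a Project behaves like a stored context bundle that gets prepended to every new conversation started inside it. A minimal JavaScript sketch of that idea, using hypothetical data structures rather than Anthropic’s actual API:

```javascript
// Hypothetical model of how a Project's uploaded documents are loaded
// into every chat started inside it. Not Anthropic's real API surface.
const project = {
  name: "Docs refresh",
  context: [
    { title: "Style guide", text: "Use sentence case for headings." },
    { title: "Brand manual", text: "Always spell the product name in full." },
  ],
};

function newChat(project, userMessage) {
  // Every chat begins with the project's documents already in context,
  // so the user never re-uploads or re-explains the background.
  const contextBlock = project.context
    .map((doc) => `[${doc.title}]\n${doc.text}`)
    .join("\n\n");
  return [
    { role: "system", content: contextBlock },
    { role: "user", content: userMessage },
  ];
}

const messages = newChat(project, "Rewrite this heading to match our style.");
console.log(messages[0].content.includes("Style guide")); // true
```

The point of the sketch is the workflow, not the plumbing: the upload happens once, and every subsequent session starts with the background already present.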
Massive token windows let you dump whole documents
Claude’s context window is a critical metric for any LLM. Both free and paid users receive a 200 k token baseline, roughly equivalent to 500 pages of text in a single session. In March 2026, Anthropic lifted the ceiling for Opus 4.6 and Sonnet 4.6 plans to 1 million tokens at no extra charge. This upgrade enables users to feed entire research reports, long drafts, or multi‑chapter PDFs into the model without chopping them into smaller pieces.
The trade‑off is that larger contexts consume quota faster—each token held in memory adds weight to the usage meter. Nolen mentions disabling the extended token window for some tasks to keep costs predictable. Nonetheless, the ability to upload a 160‑page, 17 MB driving‑license test manual and have Claude retrieve specific passages instantly showcases how the huge window eliminates the “lose the thread” problem that plagues many other AI assistants.
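The page estimates above imply a rough tokens-per-page rate; a quick sketch of that arithmetic, where the ~400 tokens-per-page figure is derived from the article’s own 200 k ≈ 500 pages estimate rather than any official specification:

```javascript
// Rough capacity arithmetic, derived from the article's
// "200k tokens ≈ 500 pages" estimate (~400 tokens per page).
const TOKENS_PER_PAGE = 200_000 / 500; // = 400, an assumption, not a spec

function pagesThatFit(contextWindowTokens) {
  // How many typical pages fit in a given context window.
  return Math.floor(contextWindowTokens / TOKENS_PER_PAGE);
}

console.log(pagesThatFit(200_000));   // 500 pages on the baseline window
console.log(pagesThatFit(1_000_000)); // 2500 pages on the 1M-token window
```

Actual capacity varies with formatting and tokenizer behavior; a dense 17 MB PDF like the driving-manual example will tokenize differently from plain prose.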
Pricing, platform support, and availability
Claude is available on both Windows and macOS. Pricing is tiered:
- Free plan – unlimited chats with the 200 k token baseline.
- Pro plan – $17 per month, adds higher rate limits and priority access.
- Max plan – $100 per month per person, includes unlimited Projects, 1 million‑token windows, and access to the most capable Opus 4.6/Sonnet 4.6 models.
The author notes that the free tier already offers a generous token window, making Claude accessible for hobbyists and occasional writers, while power users and teams can justify the paid tiers for the productivity gains described earlier.
Why these features matter for the future of work
Nolen concludes that Claude’s three standout capabilities—inline interactive visuals, persistent project context, and a massive token window—address the friction points that often turn AI‑assisted work into extra work. By keeping visual design, data manipulation, and document reference inside the chat, Claude lets users stay in a single mental flow. The author cautions that no single tool is universally superior, but for anyone whose daily routine involves repetitive knowledge work, Claude’s unique toolbox can shave minutes—or even hours—from each session.
The piece ends with a reminder that while Claude is not the only AI worth using, its blend of web‑native interactivity and deep context handling sets a new benchmark for conversational assistants.
Technical specifications
- Operating systems: Windows, macOS
- Individual pricing: Free plan; Pro plan at $17/month
- Group pricing: Max plan at $100/month per person
FAQ
What are Claude’s inline visuals and how do they differ from Artifacts?
Inline visuals are interactive elements built with HTML, CSS, JavaScript, and SVG that render directly between paragraphs of a response. Unlike Artifacts, which persist in a side panel and can be shared, inline visuals are transient sketches, though they can be exported as SVG or HTML files or promoted to full Artifacts.
How does the Projects feature improve workflow for repetitive tasks?
A Project stores uploaded documents once—style guides, code docs, research PDFs, brand manuals—and loads them automatically into every new chat started inside it, so you never re-supply the same background material.
What token limits does Claude offer for free and paid users?
All users get a 200 k token baseline; paid Opus 4.6 and Sonnet 4.6 plans raise the context window to 1 million tokens at no extra charge.
Prepared by the editorial stack from public data and external sources.