Clicky: the AI companion that sits next to your cursor and transforms learning

At a glance:

  • Clicky is a free, open-source AI companion that sits next to your cursor and provides spoken guidance while you work
  • Powered by Claude, AssemblyAI, and ElevenLabs, it currently only works on macOS but a Windows version is coming soon
  • The tool has gone viral with 15,000 likes and nearly 3 million views on its demo video

What is Clicky?

Clicky is a refreshing departure from an increasingly homogeneous landscape of AI tools, most of which focus on the same productivity tasks: building presentations, writing emails, or assisting with code. While these tools are undoubtedly useful, the novelty has worn off for many users, who now experience a kind of "AI fatigue." Clicky breaks through this noise by offering something genuinely different: a tiny animated AI companion that sits next to your cursor on your screen.

The tool was created by Farza Majeed, previously the founder of buildspace, who posted a short demo on X that quickly went viral. At the time of writing, the post has garnered 15,000 likes and nearly three million views. The demo shows a blue animated companion that moves with the cursor, can see the screen in real time, listen to the user speak, and respond both verbally and by pointing to elements on the screen. This interactive, visual approach sets Clicky apart from traditional chatbots and productivity tools.

How Clicky Works

Under the hood, Clicky is powered by a combination of cutting-edge AI services. Claude provides the core AI intelligence, AssemblyAI handles voice transcription to convert spoken questions into text, and ElevenLabs generates the spoken responses. This combination allows Clicky to understand natural language questions, process visual context from the screen, and provide immediate, spoken answers.

The user experience is remarkably simple. Once installed and granted the necessary permissions, users activate Clicky by pressing and holding the Control and Option keys on their Mac. At that moment, Clicky captures the screen, listens to the user's question, and responds almost immediately with a spoken answer. When the interaction is complete, the user simply releases the keys. This activation method makes using Clicky feel as natural as tapping a friend on the shoulder and asking for help, which is a significant achievement in human-computer interaction.
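The hold-to-talk flow described above (capture the screen, transcribe the question, reason over both, voice the answer) can be sketched as a simple pipeline of swappable components. Everything below is an illustrative assumption, not Clicky's actual source code: the class, function names, and stub responses are invented for the sketch. In the real tool, the transcription step would call AssemblyAI, the reasoning step would call Claude with the screenshot as visual context, and the speech step would call ElevenLabs.

```python
# Minimal sketch of a hold-to-talk companion pipeline (hypothetical,
# not Clicky's real implementation). Each stage is a plain callable so
# the real API clients could be swapped in later.

from dataclasses import dataclass
from typing import Callable


@dataclass
class CompanionPipeline:
    transcribe: Callable[[bytes], str]   # mic audio -> text (e.g. AssemblyAI)
    answer: Callable[[bytes, str], str]  # (screenshot, question) -> guidance (e.g. Claude)
    speak: Callable[[str], str]          # guidance text -> spoken output (e.g. ElevenLabs)

    def on_hotkey_held(self, screenshot: bytes, mic_audio: bytes) -> str:
        """Runs once per hotkey press: the screen is already captured,
        so transcribe the question, reason over the screen, voice the reply."""
        question = self.transcribe(mic_audio)
        guidance = self.answer(screenshot, question)
        return self.speak(guidance)


# Stub components so the sketch runs without any API keys.
pipeline = CompanionPipeline(
    transcribe=lambda audio: "How do I add a keyframe here?",
    answer=lambda shot, q: f"To answer {q!r}: click the diamond icon on the timeline.",
    speak=lambda text: f"[spoken] {text}",
)

reply = pipeline.on_hotkey_held(screenshot=b"<png bytes>", mic_audio=b"<wav bytes>")
print(reply)
```

Keeping each stage behind a plain callable mirrors how the article describes the tool: three independent services glued together by a simple capture-transcribe-answer-speak loop.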

The Learning Revolution

Clicky excels at solving one of the most persistent challenges in learning new software: the constant context switching between tutorials and the actual application. Traditional learning methods involve pausing videos, switching between windows, reading text instructions, and trying to match steps to what's on screen—a process that can be frustrating and inefficient.

With Clicky, this entire paradigm shifts. Instead of leaving the application to seek help, users can simply ask questions while staying within the software they're learning. For example, when working with CapCut and trying to understand keyframes, a user can open the software, look at the timeline, and ask out loud, "How do I add a keyframe here?" Clicky sees the timeline, understands the cursor position, and provides spoken guidance while pointing to the exact buttons that need to be pressed. This creates a seamless learning experience where the AI companion is present inside the application, providing real-time assistance as needed.

Creator Vision and Open Source Philosophy

Farza Majeed demonstrated Clicky's potential by using it to learn DaVinci Resolve, a complex video editing software. This practical application highlights how Clicky can make daunting software more approachable by providing contextual, visual guidance. The fact that Majeed chose to open-source the project just a day after posting the demo speaks to his commitment to making this innovative tool accessible to everyone.

The decision to open-source Clicky has already sparked a community of developers who are creating their own versions, particularly for platforms not yet officially supported. While Clicky is currently only available on macOS, Majeed has confirmed that a Windows version is in development. The community-built versions appearing on GitHub and X suggest that the Windows release may come sooner than expected, demonstrating the power of open-source development to accelerate innovation and platform expansion.

Why Clicky Stands Out

In a market saturated with AI tools that often feel like incremental improvements on existing concepts, Clicky represents a true paradigm shift. Its value lies not in performing tasks for users, but in enhancing their ability to learn and navigate complex software through direct, visual interaction. This "learning by doing" approach aligns with how most people actually acquire skills—through experimentation, immediate feedback, and contextual guidance.

The tool's viral success indicates that it has tapped into a genuine need that many users didn't even realize they had. As the author notes, "the bar for making someone stop and pay attention to anything AI has gotten absurdly high," yet Clicky managed to break through precisely because it offers something fundamentally different: an AI companion that doesn't just respond to text, but sees what you're looking at and guides you through it visually. This combination of visual context, natural language interaction, and seamless integration into the workflow makes Clicky not just another AI tool, but potentially a new category of human-computer interaction.

The Future of AI Companions

Clicky's success raises intriguing questions about the future of AI companions and their potential to transform how we interact with technology. As the tool evolves and expands to more platforms, we may see a shift toward more embodied AI that can understand and interact with our digital environments in increasingly sophisticated ways.

The open-source nature of Clicky also suggests a future where such tools become more customizable and community-driven, allowing users to tailor the AI companion to their specific needs and preferences. As more developers contribute to the project, we can expect to see additional features, improved accuracy, and broader platform support. For now, Clicky stands as a testament to the power of innovative thinking in the AI space—a simple yet profound idea that has the potential to change how we learn and work with software.

Editorial

SiliconFeed is an automated feed: facts are checked against sources; copy is normalized and lightly edited for readers.

FAQ

What is Clicky?
Clicky is a free, open-source AI companion that sits next to your cursor on your screen. It can see what you're looking at, listen to your questions, and provide spoken answers while pointing to elements on your screen. It's designed to help you learn software by providing real-time, contextual guidance without having to switch between applications.
How does Clicky work?
Clicky is powered by Claude for AI intelligence, AssemblyAI for voice transcription, and ElevenLabs for text-to-speech. On macOS, users activate it by pressing Control and Option keys simultaneously. Clicky then captures the screen, listens to the question, and responds immediately with a spoken answer. When done, users release the keys. It works by combining visual context with natural language processing to provide relevant guidance.
Is Clicky available on Windows?
Currently, Clicky is only available on macOS. However, the creator Farza Majeed has confirmed that a Windows version is in development. Community-built versions for Windows have already started appearing on GitHub and X, suggesting that an official Windows release may come soon. The open-source nature of the project has accelerated platform expansion beyond the initial macOS-only offering.

Prepared by the editorial stack from public data and external sources.