Cluely AI: The Controversial Tool Redefining Digital Assistance
In April 2025, a peculiar AI startup called Cluely entered the tech scene with an eyebrow-raising premise: "cheat on everything." Its 21-year-old founders, Chungin "Roy" Lee and Neel Shanmugam, raised $5.3 million in seed funding despite—or perhaps because of—their provocative approach to AI assistance. The duo's journey from being suspended at Columbia University to securing millions in Silicon Valley funding offers a fascinating glimpse into both the innovation and ethical concerns swirling around AI today.
From Campus Suspension to Silicon Valley Funding
The Cluely story begins with Interview Coder, a tool Lee and Shanmugam developed to help software engineers "cheat" on technical interviews. The project led to disciplinary action at Columbia University, where both were ultimately suspended. Rather than backing down, they doubled down on their vision, expanding it into what's now known as Cluely.
"$5 million to change the definition of the word 'cheating,'" Lee tweeted after announcing their successful funding round led by Abstract Ventures and Susa Ventures in April 2025.
Though just weeks old, the company claims to have already reached $3 million in annual recurring revenue, a figure that suggests significant market interest despite the ethical questions surrounding the tool.
What Exactly Does Cluely Do?
At its core, Cluely operates through a hidden, "undetectable" browser window that provides users with real-time AI assistance during live interactions. According to its marketing materials, the tool can do the following (a rough sketch of what such a pipeline might look like appears after the list):
- Read and analyze what's on your screen
- Listen to audio from conversations or meetings
- Provide instant, context-aware responses and suggestions
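Cluely's internals are not public, so the snippet below is only a minimal, hypothetical sketch of the capture-context-then-suggest loop the marketing describes. The function names (`capture_screen_text`, `capture_recent_audio_transcript`, `ask_llm`) are placeholders, not Cluely APIs; a real tool would wire them to screen OCR, live transcription, and a hosted language model.

```python
# Hypothetical sketch of a "read screen + listen + suggest" assistant loop.
# All component functions are stand-ins; nothing here reflects Cluely's actual code.

import time


def capture_screen_text() -> str:
    """Placeholder: a real tool would OCR or scrape the active window."""
    return "Interviewer: can you explain how a hash map works?"


def capture_recent_audio_transcript() -> str:
    """Placeholder: a real tool would transcribe the last few seconds of audio."""
    return "...and walk me through the average time complexity."


def ask_llm(prompt: str) -> str:
    """Placeholder: a real tool would call a hosted language model here."""
    return "Suggest: describe buckets, hashing, and O(1) average lookups."


def assistant_loop(poll_seconds: float = 2.0, max_iterations: int = 3) -> None:
    """Repeatedly gather on-screen and audio context, ask the model, show a hint."""
    for _ in range(max_iterations):
        context = f"{capture_screen_text()}\n{capture_recent_audio_transcript()}"
        started = time.monotonic()
        suggestion = ask_llm(f"Given this context, suggest a response:\n{context}")
        latency = time.monotonic() - started
        # In reviewers' testing, this round trip is where real products struggle:
        # tens of seconds of latency makes live use impractical.
        print(f"[{latency:.2f}s] {suggestion}")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    assistant_loop()
```

Even this toy loop makes the latency problem concrete: each pass must capture context, round-trip to a model, and render a suggestion before the conversation moves on, which is exactly the step reviewers found too slow in practice.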
The company offers a limited free version and a Pro subscription at $20 monthly (or $100 annually), targeting scenarios from job interviews and sales calls to exams and negotiations.
The launch campaign culminated in a slickly produced but controversial video showing Lee using Cluely to lie about his age and art knowledge during a date. When Lee is caught, the AI attempts to salvage the situation by suggesting flattering comments about his date's artwork.
The Reality Behind the Marketing
Multiple journalists who've tested Cluely report significant gaps between its ambitious marketing and actual performance. Victoria Song from The Verge found the tool plagued by technical issues during testing:
- Substantial latency, with AI responses taking up to 90 seconds—an eternity in real conversation
- Audio problems during video meetings
- Conspicuous user behaviors (like looking away from the camera to read suggestions) that made the tool far from "undetectable"
"It's hard to look smart when the AI can take two whole minutes to digest a conversation," Song wrote. "I'd ended up working harder to be worse at my job than I usually am."
Business Insider's testing revealed similar limitations, including factual hallucinations where the AI fabricated skills the reporter never had while missing qualifications actually listed on their LinkedIn profile.
Lee acknowledges these shortcomings, noting that Cluely is currently "in a really raw state" and that their launch video represented "a vision, not a product."
The Ethical Debate
Cluely's marketing as a tool to "cheat on everything" has ignited heated ethical debates across tech forums, educational institutions, and professional circles:
The Case for Cluely
Defenders, including its founders, argue that Cluely represents the inevitable evolution of AI assistance. They compare it to calculators, spellcheckers, and search engines—all initially derided as "cheating" before becoming standard tools.
"Using AI is just inevitable and something that we should just all embrace," Lee told Business Insider. He positions Cluely as "AI maximalism," suggesting that AI should help in every possible context where it can be useful.
The company's manifesto frames the technology in revolutionary terms: "The future will reward leverage, not effort."
The Concerns
Critics raise numerous objections to Cluely's approach:
- Academic integrity: The tool potentially undermines educational assessment and genuine learning.
- Professional ethics: Companies like Amazon explicitly prohibit such tools during interviews, stating they create an "unfair advantage" and make it impossible to accurately evaluate candidates.
- Privacy risks: Cluely's screen and audio monitoring capabilities raise serious privacy concerns, even though the company claims it doesn't save user data.
- Normalization of dishonesty: The marketing explicitly promotes deception as a virtue, potentially reshaping social norms around honesty.
- Practical limitations: Even if ethically accepted, current technical constraints make the tool less useful than advertised.
Looking Forward
Whether Cluely represents the future of AI assistance or a cautionary tale remains to be seen. Its provocative approach has certainly garnered attention, but sustained success will depend on addressing both technical limitations and ethical concerns.
Lee hints at future improvements: "We've upgraded all our servers, we've optimized the algorithms, and right now it should be about three times faster." But the larger question remains whether tools designed explicitly for deception have a legitimate place in our technological ecosystem.
For now, Cluely serves as a fascinating case study in the complex interplay between technological innovation, ethical boundaries, and market incentives in the AI era.
Citations:
- TechCrunch: Columbia student suspended over interview cheating tool raises $5.3M
- The Verge: I used the 'cheat on everything' AI tool and it didn't help me cheat on anything
- Business Insider: A new AI app that helps you cheat in conversations is slick, a little creepy, and not quite ready
- PCMag: This AI Tool Helps You Cheat on Job Interviews, Sales Calls, Exams
- NDTV: AI Startup That Lets Users Cheat In Exams And Interviews Raises $5.3 Million