If you have been following tech news this week, you probably already caught a headline or two about Vercel getting hacked. But honestly, Vercel is just the latest name in a list that has been quietly growing for over a year now. Lovable, GitHub Copilot, Context.ai, the tools that millions of developers trust with their code, credentials, and entire deployment pipelines, have been getting hit one after another.
And the worst part? Most of the people actually using these platforms have no idea how bad things have gotten.
This article is a full breakdown. We are going to walk through exactly what happened, who is behind it, what data was actually at risk, and what you should be doing right now if you use any of these tools. Ekdum seedha baat, no jargon drama.
Let us start with the freshest story first because this one is literally still developing as of April 20, 2026.
Vercel, the company behind Next.js and one of the most widely used frontend deployment platforms in the world, confirmed on April 19, 2026 that it had suffered a security breach. In an official security bulletin, the company stated that unauthorized access was gained to “certain internal Vercel systems,” and that a limited subset of customers was directly impacted.
Vercel CEO Guillermo Rauch followed that up with a more candid post on X, explaining what actually happened. A Vercel employee was using a third-party AI tool called Context.ai. That tool got breached. And through that breach, attackers were able to access the employee’s Google Workspace account, and from there, they escalated access into Vercel’s internal environments.
The technical trail here is frankly wild, and it is worth understanding fully because this same pattern is going to keep happening.
Cybercrime intelligence firm Hudson Rock dug into the origins of the breach and found that a Context.ai employee had been infected with Lumma Stealer malware back in February 2026. How did they get infected? Browser history logs from the compromised machine showed the person had been searching for and downloading Roblox “auto-farm” scripts and game exploit executors. These types of downloads are well-known delivery mechanisms for Lumma Stealer.
That single infection quietly harvested a massive haul of credentials from the machine: Google Workspace logins, Supabase keys, Datadog access, and importantly, OAuth tokens tied to Context.ai’s Google Workspace OAuth app. The compromised employee was a core member of the “context-inc” team on Vercel, meaning their credentials had direct administrative access to Vercel environment settings.
Context.ai itself disclosed a March 2026 AWS breach where it detected and blocked unauthorized access. But what came out later is that the attacker had also gotten OAuth tokens for some of its consumer users during that period. One of those users was a Vercel employee who had signed up for Context.ai using their Vercel enterprise account and had granted “Allow All” permissions.
Once the attacker had valid OAuth tokens, they did not need to crack any passwords. They simply replayed the stolen tokens and walked into the Vercel employee’s Google Workspace as if they owned it. From there, they enumerated internal Vercel environments and accessed environment variables that were not marked as “sensitive” in Vercel’s system, which meant they were not encrypted at rest and could be read directly.
According to Vercel’s official guidance, environment variables that were marked as sensitive are stored in a way that prevents them from being read, and there is currently no evidence those values were accessed. But the non-sensitive variables could have contained API keys, database credentials, and deployment tokens depending on how individual teams had configured their projects.
A few hours before Vercel even went public with their confirmation, a threat actor using the name “ShinyHunters” posted on BreachForums offering what they claimed was Vercel’s internal data for $2 million. The claimed dataset reportedly includes access keys, source code, employee account credentials, API keys, NPM tokens, GitHub tokens, and screenshots of internal enterprise dashboards.
As proof, they shared a text file with 580 Vercel employee records containing names, email addresses, and account activity timestamps. They also separately sent a ransom demand of $2 million via Telegram to Vercel, though the company has not publicly confirmed those negotiations.
Here is where it gets complicated: members of the actual ShinyHunters group, the same crew behind the massive 2024 Ticketmaster breach, have denied involvement to BleepingComputer. The group’s name might just be borrowed by whoever actually did this to add credibility. So whether this is the real ShinyHunters or an impersonator is still unclear. What is not unclear is that Vercel got hit, the attackers demonstrated real internal access, and the data is allegedly sitting on the dark web right now.
Vercel has brought in Google-owned Mandiant for incident response, engaged law enforcement, and confirmed that Next.js and its broader open-source supply chain were not affected. The company also confirmed all services remain operational.
If you or your team hosts anything on Vercel, here is what the company recommends doing immediately:
This is not optional if you run anything in production on Vercel. Do it today.
Now let us rewind a bit, because the Vercel story does not exist in isolation. The AI tools security crisis has been building for a while, and no platform illustrates it more clearly than Lovable.
Lovable is a “vibe coding” platform that lets you build full-stack web applications using natural language prompts. It is genuinely impressive at what it does. The problem is that “impressive-looking” and “actually secure” are two very different things, and Lovable has had a rough year proving that gap.
The big one is a vulnerability catalogued as CVE-2025-48757. It was first discovered on March 20, 2025 by a researcher named Matt Palmer while looking at a Lovable-built app called Linkable, a site for generating profiles from LinkedIn data.
What Palmer found was simple but catastrophic. Lovable-generated apps that used Supabase for their backend had insufficient or missing Row Level Security (RLS) policies. In plain English: the database was not checking whether the person asking for data was actually supposed to see it.
Because Lovable apps expose a public “anon key” on the client side (this is how Supabase works by design), anyone could take that key and query the Supabase database directly. Without proper RLS, you could dump entire tables, full user lists, payment records, API keys, without ever logging in. All you had to do was swap the intended query with one of your own.
Palmer disclosed this to Lovable on March 21, 2025. The company acknowledged receipt but never gave a substantive response. Two months went by. Then on April 14, 2025, a Palantir engineer independently found the exact same vulnerability and tweeted about it publicly, showing real examples of sensitive data being pulled from live Lovable apps, personal debt amounts, home addresses, API keys, the works.
The story did not go public formally until Palmer published his full disclosure on May 29, 2025, after the 45-day disclosure window expired with no meaningful fix from Lovable.
A subsequent scan of 1,645 apps built with Lovable found that 170 of them had critical security flaws along these same lines.
In February 2026, almost a year after the RLS vulnerability was first reported, researcher Taimur Khan found 16 vulnerabilities in a single Lovable-hosted app that was showcased on Lovable’s own Discover page with over 100,000 views and around 400 upvotes. Six of those vulnerabilities were classified as critical.
The app turned out to be an EdTech platform. Due to a logic error in the AI-generated authentication code, the kind of mistake that a human reviewer would likely have caught immediately, the access control was completely inverted. Authenticated users were being denied while unauthenticated visitors had full access.
The result: anyone could access all 18,697 user records. That included 4,538 student accounts, over 10,500 enterprise users, and 870 records with fully exposed PII. Students from UC Berkeley and UC Davis were in that dataset, along with potentially minors from K-12 schools. Attackers could also send bulk emails through the platform, delete user accounts, change credit balances, and grade student test submissions.
Khan reported this to Lovable via their support channel. His ticket was reportedly closed without a response. After he went public, Lovable’s CISO said the company had only received a proper disclosure report on the evening of February 26 and acted “within minutes.” They also noted that the vulnerable database was not hosted by Lovable and that the app included code not generated by Lovable.
Khan’s response was pointed: “If Lovable is going to market itself as a platform that generates production-ready apps with authentication included, it bears some responsibility for the security posture of the apps it generates and promotes.”
Beyond the data exposure issues, Lovable ran into another problem entirely: criminals discovered it was an excellent tool for building phishing websites.
Security firm Guardio Labs published research in April 2025 coining the term “VibeScamming” to describe how threat actors were using Lovable to spin up complete phishing campaigns. The platform’s capabilities, easy web app creation, built-in hosting, no guardrails on what you could build, lined up perfectly with what scammers need. Guardio tested Lovable against other AI platforms and found it scored 1.8 out of 10 on exploitability, meaning it was the most exploitable of all platforms tested.
Proofpoint researchers reported in August 2025 that since February of that year, they had observed “tens of thousands” of Lovable URLs being used in malicious phishing campaigns distributed through email. These campaigns impersonated brands like Microsoft and UPS, used CAPTCHA pages for filtering, and posted harvested credentials directly to Telegram. A single campaign in February 2025 alone sent “hundreds of thousands” of phishing messages and hit more than 5,000 organizations.
Lovable has since taken down multiple phishing clusters after being notified, but the fundamental problem, that anyone can build a convincing credential-harvesting site in minutes with no guardrails, remains a live issue.
Now for another one that did not get the attention it deserved. GitHub Copilot, the AI coding assistant used by tens of millions of developers, had a critical vulnerability tracked as CVE-2025-59145 with a CVSS score of 9.6 out of 10.
The exploit, named “CamoLeak,” used prompt injection techniques to manipulate how Copilot processes information. Here is how it worked: an attacker submits a pull request to a repository. Inside that pull request, they hide malicious instructions inside GitHub’s invisible markdown comment syntax, comments that do not render in the normal web interface, so human reviewers see nothing suspicious. But when GitHub Copilot Chat reviews the pull request, it ingests the raw text and treats those hidden instructions as legitimate commands.
The AI then follows those instructions, exfiltrating source code, API keys, and cloud secrets to a server the attacker controls, all without executing any visible malicious code. No malware. No suspicious binary. Just the AI assistant doing exactly what it was secretly told to do.
GitHub patched this in August 2025 by disabling image rendering inside Copilot Chat, but the vulnerability was not publicly disclosed until October 2025. That two-month gap, where the patch was quiet and no one outside GitHub knew to worry, is worth noting.
That same year, security researchers published research under the name “IDEsaster” revealing a systemic vulnerability across essentially every AI coding tool on the market. GitHub Copilot, Cursor, Claude Code, JetBrains Junie, all of them were found vulnerable to a three-stage attack chain: Prompt Injection, then AI Tools, then Base IDE Features.
The key insight was that attackers do not need a bug in the AI extension itself. They can inject malicious instructions through things like rule files, MCP servers, deeplinks, or even file names. Once the AI agent processes those instructions, it uses legitimate IDE features, editing config files, creating JSON files that reference remote schemas, manipulating VS Code settings, to leak data or achieve remote code execution.
The result was 24 CVEs being assigned across multiple vendors. AWS, GitHub, and others issued security advisories. Some vendors patched. Others said the behavior was the user’s responsibility.
If you zoom out, there is a clear pattern connecting all of these incidents, and it is worth naming directly.
Every time an AI tool integrates with your development environment, it inherits your permissions and your trust. GitHub Copilot can read your private repos because you gave it access. Context.ai had access to Vercel infrastructure because an employee signed up with their enterprise account and clicked “Allow All.” Lovable generates apps with your database credentials baked in.
The attack surface for any individual developer or organization is now much larger than just their own code. It includes every AI tool they use, every OAuth grant they have approved, and every third-party platform that those tools connect to. As the Vercel breach chain showed, an employee at a vendor you never heard of downloads a Roblox cheat script, and two months later your company’s internal systems are being auctioned on BreachForums for $2 million.
Lovable’s repeated security failures point to something uncomfortable about the “vibe coding” movement. When an AI generates code that looks correct, executes successfully, and produces a polished UI, developers often ship it without the kind of security review that traditional development would require. The code works. Why dig deeper?
Researcher Taimur Khan coined a term for the exploitation of this gap: “vibe hacking.” Because AI-generated code defaults to functionality over security, attackers who understand this pattern can predict where the weak spots will be. They do not need to be sophisticated hackers. They just need to know that RLS is probably misconfigured, that authentication logic is probably inverted somewhere, and that the anon key is definitely in the client-side code.
Georgia Tech’s Systems Software and Security Lab has been tracking this under the “Vibe Security Radar” project. In January 2026, six CVEs were directly attributed to AI-generated code. By March 2026, that number had climbed to 35 in a single month. The trend is moving in the wrong direction fast.
The Vercel breach is a masterclass in how OAuth grants have become the dominant initial access vector for enterprise breaches. OAuth tokens persist. They often carry broad scopes. They are rarely audited. And they are stored in browsers where infostealers can grab them alongside passwords and session cookies.
IBM’s 2026 X-Force report found over 300,000 ChatGPT credentials discovered in infostealer malware during 2025. That number reflects a broad pattern: the same malware families that have been stealing banking passwords for years are now specifically targeting AI tool credentials because those sessions carry access to so much more than just a chatbot account.
India has a massive developer community that has embraced AI coding tools fast, probably faster than most of the world. Platforms like Lovable and Vercel are popular with Indian startups, freelancers, and engineering students building side projects. So let us talk about concrete steps.
One thing worth calling out directly: every platform in this article, Lovable, GitHub, and to some extent Vercel, has at various points pointed fingers at the end user when vulnerabilities came to light. “The security scanner warned you.” “It is at the user’s discretion to implement recommendations.” “GitHub Copilot users are responsible for reviewing AI suggestions.”
These statements are not entirely wrong. Users do have responsibility. But when a platform markets itself as production-ready, showcases apps to 100,000 visitors, and generates code that looks secure without being secure, the “it is the user’s fault” response feels like a convenient exit.
The security community has been pushing back on this, and it is a conversation worth having loudly. AI platforms that lower the barrier to building apps also lower the barrier to building insecure apps. If these companies want to own the upside of “anyone can build software now,” they should also own some portion of the downside.
The AI tools ecosystem is moving at a speed that the security infrastructure around it simply has not kept up with. Vercel getting hit via a Roblox exploit on a vendor’s employee laptop is not a freak event. It is a preview of the kind of attack chain that is going to become normal as more critical infrastructure runs through AI tool integrations.
For developers, the lesson is not “stop using AI tools.” These tools are genuinely useful and the productivity gains are real. The lesson is to treat AI-generated code and AI-connected infrastructure with the same scrutiny you would give anything else in your production environment. Yeh tools intelligent hain, but they are not your security team.
Stay updated, rotate your credentials, audit your OAuth grants, and for the love of everything, do not download Roblox exploits on your work laptop.
This article will be updated as the Vercel investigation continues. Last updated: April 20, 2026.
Every result season, thousands of AKTU students ask the same questions. Will I get grace…
You are watching IPL. A quick delivery hits the pad. The on-field umpire raises the…
The CBSE Class 10 Result 2026 has brought great news for students and parents in…
It's that time of year again, the one where parents refresh result portals every two…
The Story in One LineAt Black Hat USA 2025, two security researchers stood on stage…
The Telangana State Board of Intermediate Education is expected to declare the TS Inter Result…