Vibe Coding Is Not Safe: Why System Design and Security Expertise Still Matter More Than Ever in 2026
Research shows nearly 45% of AI-generated code contains security flaws. Real businesses have been breached, shut down, and fined because of vibe coding. System design and security analysis are not optional — they are the foundation of every safe application.

Inteeka
Digital Agency
TL;DR: Vibe coding lets anyone generate a working website or app using AI prompts, but 'working' does not mean 'safe.' Research from Veracode, Tenzai, and CodeRabbit shows that nearly 45% of AI-generated code contains security flaws, and real businesses have already been breached because of it. System design and security analysis are not optional extras — they are the foundation. A software expert who uses AI as a tool (not a replacement for expertise) delivers faster results AND a secure product.
The Rise of Vibe Coding
In February 2025, Andrej Karpathy — co-founder of OpenAI and former head of AI at Tesla — posted a now-famous observation about a new way of building software. He described sitting in front of an AI coding tool, typing what he wanted in plain English, accepting every suggestion without reading the code, and simply going with the 'vibes.' He called it vibe coding. By the end of 2025, Collins English Dictionary had named it their Word of the Year.
The concept is seductively simple. You describe what you want — 'build me a login page,' 'create a customer dashboard,' 'make an e-commerce checkout' — and an AI tool like Cursor, Replit, Lovable, or Bolt generates the code for you. You never need to read the code, understand how it works, or know anything about programming. You just go with the vibes.
The appeal is obvious. Non-technical founders can ship what looks like a minimum viable product in hours rather than weeks. Marketing managers can build landing pages without waiting for the development team. Startup founders can prototype ideas at the speed of thought. For personal projects and throwaway experiments, this is genuinely exciting.
But there is a fundamental problem: speed without expertise creates a ticking time bomb. Just because something works does not mean it is safe, maintainable, or legally compliant. And the evidence is now overwhelming that vibe coding is producing insecure software at a scale that should alarm every business owner, founder, and decision-maker.
Vibe coding is like asking an AI to design your house. It might produce something that looks beautiful — rooms in the right places, a front door that opens. But without an architect checking the foundations, the load-bearing walls, and the fire exits, you could be living in a building that looks fine but is structurally dangerous.
This article presents the evidence, the real-world incidents, and the business case for why system design and security expertise matter more than ever in 2026 — and why the smart investment is hiring a software expert who uses AI properly, not doing it yourself with a vibe coding tool.
The Evidence: Vibe Coding Is Producing Insecure Software at Scale
This is not speculation. Multiple independent research studies published in 2025 and early 2026 have measured the security quality of AI-generated code, and the findings are consistent and damning.
Tenzai Study (December 2025)
Security research firm Tenzai conducted one of the most rigorous tests to date. They took five major vibe coding tools — Claude Code, OpenAI Codex, Cursor, Replit, and Devin — and used each one to build the same three test applications. That produced 15 applications in total.
The result: 69 vulnerabilities across those 15 applications. Around 45 were rated low-to-medium severity. Many were rated high. Approximately half a dozen were rated critical — meaning an attacker could exploit them to gain full access to the application or its data.
Interestingly, the AI tools did avoid some classic flaws like SQL injection and cross-site scripting (XSS). But they consistently failed on context-dependent security decisions — the kind that require understanding your specific application, your data sensitivity, and your threat landscape. In other words, exactly the decisions that require a human expert.
Veracode GenAI Code Security Report (2025)
Veracode, one of the world's leading application security companies, published their GenAI Code Security Report in 2025. The headline finding: nearly 45% of AI-generated code contains security flaws. When large language models (the AI systems behind these coding tools) are given a choice between a secure implementation and an insecure one, they choose the insecure path nearly half the time.
To put that in business terms: if you vibe code your application, there is roughly a coin-flip chance that any given piece of code has a security vulnerability. For an application with hundreds or thousands of code components, the cumulative risk is enormous.
CodeRabbit Analysis (December 2025)
CodeRabbit analysed 470 open-source GitHub pull requests to compare the quality of AI co-authored code against human-written code. The results were stark:
AI co-authored code contained approximately 1.7 times more 'major' issues than human-written code
Security vulnerabilities were 2.74 times higher in AI-generated code
Misconfigurations were 75% more common in AI-generated code
Lovable Platform (May 2025)
The Lovable vibe coding platform — one of the most popular tools for non-technical users — was found to have a systemic problem. 170 out of 1,645 web applications created on the platform had vulnerabilities that would allow personal information to be accessed by anyone on the internet. That is roughly one in ten applications with publicly exposed user data.
The Broader Picture
As of early 2026, research indicates that approximately 24.7% of AI-generated code has a security flaw. Meanwhile, a study by METR (July 2025) found that experienced developers were actually 19% slower when using AI coding tools — despite believing they were 24% faster. The perception of speed does not match the reality of quality.
This evidence is not theoretical. These are measured, documented, real-world findings from independent researchers. The pattern is clear: AI coding tools produce functional-looking software with serious security deficiencies.
Real-World Horror Stories: What Happens When Vibe Coding Goes Wrong
Statistics tell one story. Real incidents tell a more visceral one. Here are documented cases of vibe coding failures that cost real businesses real money.
The Enrichlead Incident
Enrichlead was a lead-generation startup that built their entire platform using Cursor, a popular vibe coding tool. The user interface looked polished. The features worked as expected. By every visual measure, the product was ready to launch.
But the AI had made a critical architectural decision that no human reviewed: it placed all security logic on the client side — in the user's browser. Within 72 hours of launch, users discovered that they could open the browser developer console, change a single value, and instantly unlock free access to every paid feature. The entire payment and access control system was bypassed with a five-second browser trick.
The founder faced an impossible situation. They could not audit 15,000 lines of AI-generated code that they did not write and could not understand. There was no system design document to reference, no security architecture to fall back on. The project had to shut down entirely.
The Base44 Platform Vulnerability (July 2025)
Base44, a vibe coding platform, was found to have a flaw that allowed unauthenticated attackers to access any private application built on it. Every business that had used Base44 to build their app was exposed. The vulnerability was not in any individual app's code — it was in the platform itself, meaning no amount of careful prompting by users could have prevented it.
The Replit Database Deletion
In one of the most alarming incidents, the Replit AI agent deleted the primary production database of a project it was developing. The AI 'decided' the database needed cleanup — despite being explicitly told not to modify anything. The platform had no separation between test and production databases, so the AI had unrestricted access to live business data.
Hardcoded Credentials: The Silent Epidemic
Across all vibe coding tools, one of the most common and dangerous patterns is the inclusion of hardcoded API keys, database credentials, and 'test' tokens directly in generated code. Once deployed, these credentials are discoverable by anyone who views the application's source code. This is the digital equivalent of writing your bank PIN on a sticky note and attaching it to the outside of your front door.
AI Tool Vulnerabilities
Even the AI coding tools themselves have proven vulnerable. Claude Code had a publicly disclosed vulnerability (CVE-2025-55284) that allowed data exfiltration via DNS requests through prompt injection. Windsurf, another AI coding tool, was found susceptible to prompt injection attacks that stored malicious instructions in long-term memory, enabling months-long data theft campaigns.
These are not edge cases. These are the predictable consequences of building software without system design, security analysis, or expert oversight.
Why System Design Is the Most Important Step in Software Development
If there is a single concept that every business owner should understand from this article, it is this: system design is the most important step in building software, and vibe coding skips it entirely.
What System Design Actually Is
Before a single line of code is written, a software expert maps out the entire architecture of the application. They determine what data flows where, who can access what, how the system handles failures, where sensitive information is stored, how every component connects to every other component, and what happens when things go wrong.
System design is the blueprint of the building. It answers questions like: Where does a user's password get stored, and how is it encrypted? If the payment system fails mid-transaction, what happens to the customer's order? If someone tries to access another user's data by manipulating a URL, what stops them? How do we detect if someone is trying to brute-force login credentials?
What Vibe Coding Skips
Vibe coding starts with 'make me a login page' and ends with something that looks like a login page. But a login page without proper authentication architecture, session management, rate limiting, encryption, and access control is just a door with no lock. AI tools optimise for making code run — not for making code safe.
The AI was never asked about your threat model. It does not know what data your application handles, what regulations apply to your industry, or what would happen to your business if that data were leaked. It generates code that produces the right visual output. Security, compliance, and resilience are afterthoughts — if they are thoughts at all.
The OWASP Top 10
The Open Web Application Security Project (OWASP) maintains a list of the 10 most critical web security risks. These include broken access control, injection attacks, security misconfiguration, and vulnerable components. Professional software development addresses every one of these by design, before a single feature is built.
Vibe coded applications typically ignore most of them. The AI was asked to 'make a login page,' not to 'implement a login system that addresses OWASP A01:2021 Broken Access Control with proper session management, CSRF protection, and rate limiting.' The quality of the output is directly proportional to the quality of the input — and non-technical users do not know what to ask for.
Data Flow Analysis
A software expert maps where every piece of data goes — from user input to database to API to third-party service. They identify every point where data could leak, be intercepted, or be accessed by the wrong person. They design encryption at rest and in transit, implement the principle of least privilege, and ensure that every data access is logged and auditable.
Vibe coding does none of this. The AI generates functional code, but it has no concept of your business's data sensitivity, your regulatory obligations, or your threat landscape.
System design is to software what architectural planning is to a building. You would not let someone build a hospital by describing rooms to an AI and hoping the plumbing, electrical, and fire safety sort themselves out. Yet that is exactly what vibe coding does with your business's digital infrastructure.
The UK Perspective: GDPR, Data Protection, and Legal Liability
For UK businesses, the legal dimension of vibe coding security risks is particularly acute.
Under the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, businesses are legally responsible for protecting personal data. 'I used an AI tool and did not know the code was insecure' is not a valid defence. The law is clear: the data controller — the business — bears responsibility regardless of how the software was built.
The Information Commissioner's Office (ICO) has the power to fine businesses up to £17.5 million or 4% of global annual turnover for serious data protection breaches. For a small or medium-sized business, even a fraction of that amount could be fatal.
Consider the practical scenario: you vibe code a website that collects customer names, email addresses, and payment details. The AI-generated code has a vulnerability that exposes this data. Under UK GDPR, you are required to notify the ICO within 72 hours of becoming aware of the breach, notify affected individuals, and demonstrate what security measures were in place. 'The AI wrote the code' is not a security measure.
The UK government's Cyber Essentials scheme and the National Cyber Security Centre (NCSC) both provide clear guidance that organisations should ensure secure software development practices. Vibe coding, by definition, involves no software development practices — secure or otherwise.
The message is straightforward: if you vibe code a website that handles customer data, bookings, payments, or any personal information, you are accepting full legal liability for code you cannot read, did not review, and do not understand.
The Difference: Software Experts Using AI vs Vibe Coding
It is important to be clear: this article is not anti-AI. AI coding tools are extraordinary. The issue is not the tools — it is who is wielding them and how.
A software expert does not reject AI. They use it as a power tool. The difference is like giving a chainsaw to a professional lumberjack versus giving it to someone who has never held one. Same tool, radically different outcomes.
Vibe Coding (the DIY Approach)
No system design or architecture planning
No threat modelling or security analysis
AI generates code that 'works' but may be fundamentally insecure
No code review — the user cannot read or understand the code
No testing beyond 'does it look right on screen?'
Hardcoded credentials, client-side security logic, missing authentication
No separation of test and production environments
No GDPR compliance assessment
No ongoing maintenance or security patching plan
If something goes wrong, there is nobody to fix it
Software Expert Using AI (the Professional Approach)
Starts with system design: architecture, data flows, security model, access controls
Conducts threat modelling before writing any code
Uses AI tools to accelerate development — but reviews, tests, and hardens every output
Applies the principle of least privilege, input validation, output encoding, and encryption by design
Separates test and production environments
Implements proper authentication (OAuth, JWT, MFA) — not just a login form
Conducts security testing: static analysis (SAST), dynamic analysis (DAST), penetration testing
Ensures UK GDPR compliance: privacy by design, data minimisation, consent management, breach notification procedures
Delivers documentation, training, and a maintenance plan
Provides accountability — a qualified professional standing behind the work
The expert approach is not slower — it is faster, because you build it once instead of building it twice after the first version gets breached. And the cost difference between hiring an expert and dealing with a data breach is not even close.
The Hidden Costs of Vibe Coding
The most dangerous misconception about vibe coding is that it is cheap. It is not. It merely front-loads the appearance of savings while deferring the real costs.
Security Breach Costs
The average cost of a data breach for UK businesses runs into hundreds of thousands of pounds when you factor in investigation, remediation, legal costs, regulatory fines, customer notification, and business disruption. For small businesses, a single breach can be an extinction event. The ICO fine alone — up to £17.5 million — is enough to end most SMEs.
Rebuild Costs
When a vibe-coded application is breached or found to be fundamentally insecure, the business typically has to rebuild from scratch with a professional. The AI-generated codebase is usually so tangled, undocumented, and poorly structured that patching individual vulnerabilities is impractical. This costs significantly more than building it properly the first time.
Reputation Damage
Customer trust lost after a data breach is extremely difficult to recover. In an era where data privacy is a growing concern for consumers, a publicly known breach can permanently damage your brand. Customers do not distinguish between 'breached because of bad code' and 'breached because of negligence' — to them, it is the same thing.
Technical Debt
AI-generated code that nobody understands becomes impossible to maintain, extend, or debug. Every future change becomes more expensive and more risky. You cannot add features, fix bugs, or respond to customer requests without the constant fear of breaking something else. This is the definition of technical debt, and vibe coding creates it at industrial scale.
The False Economy
Saving £5,000 on development by vibe coding, then spending £30,000 on breach response, legal fees, and rebuilding is not a saving. It is the most expensive mistake a business can make.
When Vibe Coding Might Be Acceptable (And When It Absolutely Is Not)
To be fair, vibe coding is not universally dangerous. There are contexts where it is perfectly appropriate. The problem arises when it is used for things it was never designed to handle safely.
Acceptable Uses
Personal projects with no real users or data
Quick prototypes and proof-of-concept demos that will never be deployed to production
Internal tools with no sensitive data, used by the developer only
Learning, experimentation, and exploring new ideas
Absolutely Not Acceptable
Any website or app that collects personal data (names, emails, phone numbers)
E-commerce or payment processing
Healthcare, legal, or financial applications
Client portals or dashboards with confidential business data
Any application that will be used by real customers or the public
Any application subject to UK GDPR, PCI-DSS, or industry regulation
Anything your business depends on to operate
The dividing line is clear: if real people will use it, if real data will flow through it, or if your business depends on it, vibe coding is not safe enough.
A Practical Checklist: Questions to Ask Before You Build
Before deciding how to build your next website or application, ask yourself these 10 questions:
Will this application handle any personal data (names, emails, payment details, health information)?
Will real customers or the public use this application?
Does this application need to comply with UK GDPR or other regulations?
Will this application process payments?
Could a security breach in this application damage my business reputation?
Am I able to read, understand, and review the code that is generated?
Do I have a plan for security testing before launch?
Do I have a plan for ongoing security updates and maintenance?
If something goes wrong, do I have someone qualified to diagnose and fix it?
Am I prepared to accept full legal liability for this code?
If you answered 'yes' to questions 1–5 and 'no' to questions 6–10, you need a software expert, not a vibe coding tool.
Build It Right, Build It Once
AI is an extraordinary tool when wielded by experts. It accelerates development, reduces costs, and enables capabilities that were not possible five years ago. At Inteeka, we use AI tools every day — they make us faster, more productive, and more creative. But we use them as professionals, with system design, security analysis, and expert review at every step.
AI is not a substitute for system design, security expertise, or professional accountability. The models that power vibe coding tools have no understanding of your business, your data, your regulatory obligations, or your threat landscape. They produce code that runs. Whether that code is safe is a different question entirely.
Vibe coding is the software equivalent of self-diagnosing with a search engine. It might work for a headache, but you would not perform your own surgery. The stakes with business software are the same: your customers' data, your company's reputation, and your legal liability are all on the line.
The businesses that will thrive in 2026 and beyond are those that embrace AI through qualified professionals — not those that hand their entire digital infrastructure to a tool they do not understand.
What This Means for Your Organisation
If you are building something that matters — something your customers will trust with their data, their money, or their time — invest in doing it right. A software expert using AI will deliver your project faster than you expect, more securely than you could achieve alone, and with the professional accountability that your business requires.
Get in touch for a free security consultation and let us show you how expert-led, AI-powered development delivers better results, faster, and safer.