Vibe coding is incredible.
You can ship a full SaaS product in a weekend. Features that used to take a senior dev three days now take three hours. The speed is real.
But here is what nobody tells you when they post their “I built this in 2 hours” thread on X.
Speed without security is just a faster way to get hacked.
Let me give you some real numbers. A December 2025 study tested five of the most popular vibe coding tools including Cursor, Claude Code, Replit, and Devin across 15 applications. The output contained 69 total vulnerabilities. Around half a dozen were rated critical. A separate Veracode study found that 45% of AI-generated code still contains classic vulnerabilities from the OWASP Top-10 list, with little improvement over two years. And just last week, a Lovable-built app leaked over 18,000 users’ data because the AI implemented the access control logic completely backwards. Authenticated users were blocked. Unauthenticated users got full access.
A human reviewer would have caught that in seconds.
The problem is not vibe coding. The problem is shipping vibe coded apps without understanding what the AI actually built.
I have been building and shipping software for years. Here are the 30 security rules I follow on every single project. No exceptions.
Authentication and Sessions
Rule 1: Set session expiration properly
JWT tokens should have a maximum life of 7 days combined with refresh token rotation. Never issue tokens that live forever.
const token = jwt.sign(
  { userId: user.id },
  process.env.JWT_SECRET,
  { expiresIn: '7d' }
);
Pair this with refresh token rotation so that every time a refresh happens, the old token is invalidated. One leaked token should not last forever.
Rule 2: Never use AI-built auth
This is non-negotiable.
Authentication is the most security-critical part of your entire stack. AI generates plausible-looking auth code that has subtle logic flaws. The Lovable breach mentioned above? Classic AI auth logic inversion.
Use Clerk, Supabase Auth, or Auth0. These are battle-tested, maintained by security teams, and handle the edge cases AI will miss every single time.
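If you do need code of your own, keep it to a thin layer of glue around the provider's token check. Here is a minimal sketch, assuming your provider exposes an async token verification call (Supabase's `supabase.auth.getUser(token)` fits this shape); the middleware takes the verifier as a parameter so none of the actual auth logic is hand-rolled:

```javascript
// Sketch: gate protected routes behind a hosted auth provider's token
// check. `verifyToken` is whatever your provider exposes, injected here
// so the middleware stays provider-agnostic. Names are illustrative.
function requireAuth(verifyToken) {
  return async (req, res, next) => {
    const header = req.headers['authorization'] || '';
    const token = header.startsWith('Bearer ') ? header.slice(7) : null;
    if (!token) return res.status(401).json({ error: 'Missing token' });
    const user = await verifyToken(token);
    if (!user) return res.status(401).json({ error: 'Invalid token' });
    req.user = user; // downstream handlers can trust this
    next();
  };
}
```

The point is that the only security-critical call, `verifyToken`, belongs to a battle-tested provider, not to you or the AI.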
Rule 3: Never paste API keys into AI chats
When you paste a key into an AI chat to get help with a bug, you have no idea where that key goes. Use environment variables always.
// Never do this
const client = new OpenAI({ apiKey: "sk-abc123yourrealkeyhere" });
// Always do this
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
Add your .env file to .gitignore before you write a single line of code. Which brings us to the next rule.
Project Setup
Rule 4: .gitignore is your first file, not your last
Before you scaffold the project. Before you install packages. Before you do anything.
Create .gitignore.
Add .env, node_modules, .DS_Store, and any local config files before your first commit. One accidental push of a .env file to a public repo and your keys are compromised within minutes. GitHub scanners and credential harvesters run constantly.
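For a typical Node project, a minimal starter looks like this (adjust for your stack; every entry here is a common convention, not a requirement):

```text
# secrets (never commit these)
.env
.env.*
!.env.example
# dependencies and OS junk
node_modules/
.DS_Store
# local build output
dist/
```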
Rule 5: Rotate secrets every 90 days minimum
Set a calendar reminder. Every 90 days, rotate your API keys, database credentials, and webhook secrets. If you suspect a breach at any point, rotate immediately.
This is not paranoia. This is hygiene.
Rule 6: Verify every package the AI suggests actually exists
This one is genuinely scary and not enough people talk about it.
AI models sometimes suggest packages that do not exist. Attackers monitor for this and register those package names with malicious code inside. It is called slopsquatting, and it is a growing threat vector in 2026.
Before you run npm install on any package the AI recommends, check npmjs.com or pypi.org. Make sure the package exists, has real downloads, and has recent maintenance activity.
Rule 7: Always ask for newer, more secure package versions
When asking AI to scaffold your project, add this to your prompt: “Use the latest stable and most secure version of every package. Flag any deprecated dependencies.”
Old packages have known CVEs. AI models trained on older data will suggest older package versions by default unless you explicitly ask for newer ones.
Rule 8: Run npm audit fix right after building
Make this a habit you cannot break.
npm audit fix
Run it after every major scaffolding session. Review the output. If there are high or critical vulnerabilities that cannot be auto-fixed, address them manually before you ship anything.
Input, Data, and Queries
Rule 9: Sanitize every input. Use parameterized queries always.
SQL injection is still the most exploited vulnerability in web applications in 2026. AI-generated code frequently skips this.
// This will get you hacked
const query = `SELECT * FROM users WHERE email = '${email}'`;
// This is how you do it
const query = 'SELECT * FROM users WHERE email = $1';
const result = await db.query(query, [email]);
Never interpolate user input directly into a query. Ever. Not even once to test something quickly.
Rule 10: Enable Row-Level Security from day one
If you are using Supabase or PostgreSQL, turn on Row-Level Security before you write your first query. Not after. Before.
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can only access their own documents"
ON documents
FOR ALL
USING (auth.uid() = user_id);
The Moltbook breach in February 2026 exposed 1.5 million API keys and 35,000 email addresses from a misconfigured Supabase database. The entire thing was vibe coded. The database had no proper access controls. Row-Level Security would have prevented it.
Rule 11: Remove all console.log statements before shipping
AI loves adding console.log for debugging. It will log user objects, request bodies, tokens, and internal error details.
Every one of those is a potential data leak in your server logs.
Before you ship, search your entire codebase for console.log and remove or replace with a proper logging library that has log-level controls.
# Quick way to find them all
grep -r "console.log" ./src
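If you do not want a logging dependency, a level-gated logger is only a few lines. This is a minimal sketch, with `LOG_LEVEL` as an assumed environment variable; in production you set it to `info` or `warn` so debug output never reaches your logs:

```javascript
// Minimal level-gated logger sketch: replace raw console.log calls with
// logger.debug so debug output can be silenced via LOG_LEVEL.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function createLogger(level = process.env.LOG_LEVEL || 'info') {
  const threshold = LEVELS[level] ?? LEVELS.info;
  const log = (lvl, ...args) => {
    // Only emit messages at or above the configured threshold.
    if (LEVELS[lvl] <= threshold) {
      console[lvl === 'debug' ? 'log' : lvl](...args);
    }
  };
  return {
    error: (...a) => log('error', ...a),
    warn: (...a) => log('warn', ...a),
    info: (...a) => log('info', ...a),
    debug: (...a) => log('debug', ...a),
  };
}
```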
API and Endpoint Security
Rule 12: CORS should only allow your production domain
Never use a wildcard CORS policy in production.
// This is dangerous
app.use(cors({ origin: '*' }));
// This is correct
app.use(cors({
  origin: process.env.ALLOWED_ORIGIN, // 'https://yourdomain.com'
  methods: ['GET', 'POST', 'PUT', 'DELETE'],
  credentials: true
}));
A wildcard means any website on the internet can make requests to your API from a user’s browser. That is not an API. That is an open door.
Rule 13: Validate all redirect URLs against an allow-list
Open redirect vulnerabilities are commonly missed in AI-generated auth flows.
const ALLOWED_REDIRECTS = [
  'https://yourdomain.com/dashboard',
  'https://yourdomain.com/onboarding',
  'https://yourdomain.com/settings'
];

function safeRedirect(url) {
  if (ALLOWED_REDIRECTS.includes(url)) {
    return url;
  }
  return '/dashboard'; // safe default
}
If you do not validate, attackers will craft phishing links using your domain as a trusted relay.
Rule 14: Apply auth and rate limits to every endpoint including mobile APIs
AI-generated backends often protect the web routes and forget the mobile API routes entirely.
Every endpoint that touches user data needs authentication. Every endpoint that accepts input needs rate limiting. No exceptions for mobile, internal, or admin routes.
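One way to enforce this mechanically rather than by memory is a boot-time check over your route table that fails fast when any endpoint is missing auth or rate limiting. The metadata shape below is a hypothetical convention, not an Express API; adapt it to however you register routes:

```javascript
// Startup lint sketch: walk a route table and flag any endpoint that is
// not explicitly public but lacks auth or rate limiting. The shape
// { method, path, middleware, public } is an assumed convention.
function findUnprotectedRoutes(routes) {
  return routes
    .filter(
      (r) =>
        r.public !== true &&
        (!r.middleware.includes('authenticate') ||
          !r.middleware.includes('rateLimit'))
    )
    .map((r) => `${r.method} ${r.path}`);
}

// Call at boot: throw before the server starts listening.
function assertAllRoutesProtected(routes) {
  const exposed = findUnprotectedRoutes(routes);
  if (exposed.length > 0) {
    throw new Error(`Unprotected endpoints: ${exposed.join(', ')}`);
  }
}
```

The key design choice is that routes opt *out* of protection with an explicit `public: true` flag; forgetting a flag fails safe.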
Rule 15: Rate limit everything from day one
100 requests per hour per IP is a reasonable starting point. Adjust based on your use case.
import rateLimit from 'express-rate-limit';

const limiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 100,
  message: 'Too many requests from this IP. Please try again later.',
  standardHeaders: true,
  legacyHeaders: false
});
app.use('/api/', limiter);
Without rate limiting, a single attacker can enumerate your users, brute force passwords, or burn through your AI API budget in minutes.
Rule 16: Password reset routes get their own strict limit
Your general rate limit is not enough for password reset flows. These are high-value attack targets.
const passwordResetLimiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 3, // only 3 reset attempts per email per hour
  keyGenerator: (req) => req.body.email, // limit per email, not per IP
  message: 'Too many reset attempts. Please try again in an hour.'
});

app.post('/auth/reset-password', passwordResetLimiter, resetHandler);
Infrastructure and Cost Controls
Rule 17: Cap AI API costs in your dashboard AND in your code
Do both. Not one or the other.
Set a hard spend limit in your OpenAI or Anthropic dashboard. Then add a check in your code that tracks spend and returns a graceful error when the limit is hit. A single runaway loop or prompt injection attack can burn through thousands of dollars before you wake up.
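A sketch of the in-code half, assuming you estimate cost per request yourself; the class and numbers here are illustrative, not any provider's API:

```javascript
// In-code spend guard sketch. You estimate each request's cost (e.g. from
// token counts) and record it; check() is called before every AI API call.
class SpendGuard {
  constructor(monthlyLimitUSD) {
    this.limit = monthlyLimitUSD;
    this.spent = 0;
  }
  record(costUSD) {
    this.spent += costUSD;
  }
  check() {
    // Fail closed: once the budget is gone, block further calls.
    if (this.spent >= this.limit) {
      throw new Error('AI budget exhausted; request blocked');
    }
  }
}
```

In a real app you would persist the running total (in Redis or your database) so it survives restarts; the in-memory version is just the shape of the idea.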
Rule 18: Add DDoS protection via Cloudflare or Vercel Edge Config
Put your app behind Cloudflare on day one. It is free at the base tier and gives you DDoS protection, bot filtering, and rate limiting at the edge before traffic even hits your server.
If you are on Vercel, use Edge Config for geographic blocking and bot protection rules. This is not optional for any app with real users.
Rule 19: Lock down storage buckets
Users should only be able to access their own files. Not each other’s. Not all files in a folder. Only their own.
-- Supabase storage policy example
CREATE POLICY "Users access only their own files"
ON storage.objects
FOR ALL
USING (auth.uid()::text = (storage.foldername(name))[1]);
A Supabase bucket created as public serves every file in it to anyone with the URL, and AI-generated scaffolding will happily create one that way. Keep buckets private and add explicit policies. AI-generated code will not do this for you unless you ask.
Rule 20: Limit upload sizes and validate file type by signature
Extension validation is useless. A malicious file named payload.jpg is still a malicious file.
import { fileTypeFromBuffer } from 'file-type';

async function validateUpload(buffer, maxSizeMB = 10) {
  // Check size
  if (buffer.length > maxSizeMB * 1024 * 1024) {
    throw new Error('File too large');
  }
  // Check actual file signature, not extension
  const type = await fileTypeFromBuffer(buffer);
  const allowed = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];
  if (!type || !allowed.includes(type.mime)) {
    throw new Error('File type not allowed');
  }
  return type;
}
Payments, Email, and Webhooks
Rule 21: Verify webhook signatures before processing any payment data
A webhook without signature verification means anyone on the internet can send your server fake payment events.
// Stripe webhook verification
import Stripe from 'stripe';
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY);
app.post('/webhooks/stripe', express.raw({ type: 'application/json' }), (req, res) => {
  const sig = req.headers['stripe-signature'];
  let event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body,
      sig,
      process.env.STRIPE_WEBHOOK_SECRET
    );
  } catch (err) {
    return res.status(400).send(`Webhook signature verification failed: ${err.message}`);
  }
  // Now it is safe to process
  handleStripeEvent(event);
  res.json({ received: true });
});
Rule 22: Use Resend or SendGrid with proper SPF/DKIM records
Do not send email from a raw SMTP connection or an unverified domain. Set up SPF, DKIM, and DMARC records for your sending domain. Without these, your transactional emails go to spam and your domain reputation gets destroyed.
Resend makes this setup genuinely easy. Do it before your first email goes out.
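For reference, the three record types look roughly like this. The exact SPF include and DKIM public key come from your provider's dashboard; every value below is a placeholder, not a real configuration:

```text
; Illustrative DNS TXT records (placeholders, not provider-specific values)
yourdomain.com.                       TXT  "v=spf1 include:YOUR_PROVIDER_SPF ~all"
selector._domainkey.yourdomain.com.   TXT  "v=DKIM1; k=rsa; p=<public key from provider>"
_dmarc.yourdomain.com.                TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com"
```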
Permissions, Logs, and Compliance
Rule 23: Check permissions server-side. UI-level checks are not security.
This is one of the most common mistakes in AI-generated code.
Hiding a button in the UI does not prevent anyone from calling the API endpoint directly. Every permission check must happen on the server.
// This is not security
if (user.role === 'admin') {
  showDeleteButton();
}
// This is security
app.delete('/api/users/:id', authenticate, async (req, res) => {
  if (req.user.role !== 'admin') {
    return res.status(403).json({ error: 'Forbidden' });
  }
  // proceed with deletion
});
Rule 24: Ask the AI to act as a security engineer and review your code
After building any feature, do this before you commit.
Paste your code and say: “Act as a senior security engineer. Review this code for vulnerabilities including injection attacks, broken authentication, insecure direct object references, missing authorization, and data exposure. List every issue with severity and a fix.”
You will be surprised what it finds.
Rule 25: Ask the AI to try and hack your app
This one sounds aggressive. It is also one of the most useful things you can do.
Say: “Act as a malicious hacker. I am going to describe my app’s architecture. Try to find ways to exploit it. Be specific about attack vectors.”
It will surface things a standard code review will miss.
Rule 26: Log critical actions
Deletions, role changes, payment events, data exports, and admin actions all need to be logged with timestamp, user ID, IP address, and what changed.
async function logCriticalAction(userId, action, metadata, ip) {
  await db.query(
    'INSERT INTO audit_log (user_id, action, metadata, ip, created_at) VALUES ($1, $2, $3, $4, NOW())',
    [userId, action, JSON.stringify(metadata), ip]
  );
}
// Use it everywhere that matters
await logCriticalAction(user.id, 'ACCOUNT_DELETED', { email: user.email }, req.ip);
await logCriticalAction(user.id, 'ROLE_CHANGED', { from: 'member', to: 'admin' }, req.ip);
await logCriticalAction(user.id, 'EXPORT_TRIGGERED', { recordCount: rows.length }, req.ip);
Rule 27: Build a real account deletion flow
GDPR fines are not theoretical. Build a proper account deletion flow that removes personal data from your database, revokes all active sessions, cancels active subscriptions, and sends a confirmation email.
AI will not build this correctly unless you explicitly ask for it with every requirement spelled out.
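The flow can be sketched as one function that takes its side effects as injected dependencies. Every function name here is a placeholder for your own implementation; the part worth copying is the ordering, which cancels billing first and sends the confirmation email before the address is deleted:

```javascript
// GDPR-style deletion flow sketch. All four dependencies are placeholders
// you implement; injecting them keeps the ordering testable.
async function deleteAccount(userId, deps) {
  const {
    cancelSubscriptions,
    revokeSessions,
    sendConfirmation,
    deleteUserData,
  } = deps;
  await cancelSubscriptions(userId); // stop billing first
  await revokeSessions(userId);      // kick out every active session
  await sendConfirmation(userId);    // email while the address still exists
  await deleteUserData(userId);      // remove personal data last
}
```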
Rule 28: Automate backups and test restoration
An untested backup is not a backup. It is a false sense of security.
Automate daily database backups. Once a month, actually restore one to a test environment and verify the data is intact and the app works. Document the restoration process so anyone on your team can do it, not just you.
Rule 29: Keep test and production environments completely separate
Separate databases. Separate API keys. Separate environment variables. Separate Stripe accounts in test mode vs live mode.
Never let test data touch production infrastructure. Never let production credentials exist in your local development environment.
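A boot-time guard makes this mistake fail loudly instead of silently. The `sk_live_` and `sk_test_` prefixes are Stripe's real key convention; the function itself is a sketch to adapt for whichever credentials you hold:

```javascript
// Refuse to start if live credentials leak into a non-production
// environment (or test credentials into production).
function assertEnvSafety(env = process.env) {
  const isProd = env.NODE_ENV === 'production';
  const stripeKey = env.STRIPE_SECRET_KEY || '';
  if (!isProd && stripeKey.startsWith('sk_live_')) {
    throw new Error('Live Stripe key detected outside production');
  }
  if (isProd && stripeKey.startsWith('sk_test_')) {
    throw new Error('Test Stripe key detected in production');
  }
}
```

Call it as the first line of your server entry point, before anything connects to a database or payment provider.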
Rule 30: Never let test webhooks touch real systems
Use Stripe test mode webhooks for local development and staging, and live mode webhooks for production only. In development, use the Stripe CLI to forward events to your local server.
stripe listen --forward-to localhost:3000/webhooks/stripe
One misconfigured environment variable pointing your test server at the live Stripe webhook endpoint has already cost founders real money.
Ship Fast. Ship Secure.
Here is the reality of vibe coding in 2026.
The tools are extraordinary. The speed is real. The ability to ship a full product in a weekend is genuinely possible and genuinely impressive.
But the AI does not know your threat model. It does not know which of your users are high-value targets. It does not know that your storage bucket is wide open or that your webhook has no signature verification. It will generate code that works perfectly in a demo and has critical vulnerabilities in production.
Your job is not to write every line. Your job is to review, validate, and own everything that ships.
Thirty rules. None of them optional. All of them faster to implement upfront than to fix after a breach.
Ship fast. But ship secure.