67,000 Lines of SaaS Without a CS Degree
I'm going to say the thing out loud: I use AI to help me write code. My JavaScript is honestly not great. I learned HTML from freeCodeCamp in 2020, JavaScript from Udemy, and got my real education from Sam, a developer who'd been building software since BlackBerry apps were cutting edge. He'd sit me down, point me at files, and walk me through what to write. When he got tired, I'd be his hands. Then he'd send me off on debugging tasks and data manipulation work and let me figure it out.
That was my CS degree. Debugging, DevOps, data extraction and transformation, understanding how systems fit together, knowing what to look at when something breaks.
The Line Is Moving
There's a term going around — vibe coding. Prompting AI, accepting code you don't understand, shipping it. The output works until it doesn't, and when it breaks you don't know why.
Here's the thing people miss about that conversation: where the line sits between "understand every character" and "understand the problem" is not fixed. It moves with the models.
A year ago, you had to steer everything. Review every function, catch every edge case. The models would write code that looked right and wasn't. You needed real code literacy just to use them safely.
Today, if I describe a problem clearly — what's actually happening, what should be happening, what the constraints are — Claude will often explore solutions I wouldn't have reached on my own. It'll suggest architectural approaches I hadn't considered. It'll refactor something I described poorly into something cleaner than what I had in my head.
Two years ago I had to spot every error. Now I describe the error and the model fixes it. That's still a skill — describing problems clearly is harder than it sounds. But it's a different skill than writing JavaScript from scratch.
I just try things and see if the model can handle it. If it can with a clear prompt, I let it run. If it starts going sideways — which it does whenever it's missing context about the system, auth flows, data relationships, platform constraints — I take over. That's the whole game.
The Oase Build
Oase is a management platform for design agencies. My co-founder Louis runs a design agency in the Netherlands — six years, real clients, real team. He was paying hundreds of euros a month across ClickUp, Typeform, Slack, Google Drive, and a handful of other tools. None of them were built for how design agencies actually work. He was spending more time configuring ClickUp than doing design work. Paying $80/month for Typeform because it looks better than Google Forms — and for a design agency, that matters.
Louis actually vibe-coded the first version himself in Claude Code. No coding background. He got a working prototype — buggy, local-only, but it proved the concept. Then he showed it to me and said if I helped take it to production we'd be partners. I looked at it and I could see what it needed — multi-tenancy, proper auth, a real data model, deployment. Not because I could write better JavaScript than Claude gave him, but because I'd spent four years building on these platforms and I knew what production meant.
Here's what it looks like now:
- 67,000 lines of TypeScript
- 86 Prisma database models
- 312 API route files
- 220 React components
- 24 email templates
- 95 in-app documentation pages
- 11 Playwright end-to-end test files
- 5 Vercel cron jobs
The stack: Next.js 16, PostgreSQL on Supabase via Prisma, Firebase Auth and Storage, Stripe for billing, Claude for the AI assistant, Resend for email. Deployed on Vercel.
What 86 Database Models Means
The schema has 2,426 lines. It covers multi-tenant organizations, role-based access (owner, admin, staff, client, super admin), projects with phase-based timelines, tasks with Kanban workflows, a Slack-like messaging system with channels and DMs and reactions and threading, nested file management with brand kits and public share links, pin-based design reviews with AI-powered feedback analysis, quote proposals with negotiation tracking, invoices with full lifecycle management, playbooks that extract into task templates, a 28-tool AI assistant, email marketing sequences, and prospect CRM.
Every model relates to other models. Every relation has access control. Every API route checks auth context against the role hierarchy. One source of truth for permissions: a single file that every route references.
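To make that concrete, here's roughly the shape of a single-source-of-truth permission check. This is a sketch, not Oase's actual permissions file; the function and constant names are illustrative.

```typescript
// Illustrative sketch of a shared permissions module that every
// API route imports, so the hierarchy lives in exactly one place.

type Role = "super_admin" | "owner" | "admin" | "staff" | "client";

// Higher rank means more privilege.
const ROLE_RANK: Record<Role, number> = {
  super_admin: 5,
  owner: 4,
  admin: 3,
  staff: 2,
  client: 1,
};

// A route declares the minimum role it needs; the comparison
// logic is never duplicated inside individual route handlers.
export function hasRole(actual: Role, required: Role): boolean {
  return ROLE_RANK[actual] >= ROLE_RANK[required];
}
```

A route then reads like `if (!hasRole(member.role, "admin")) return forbidden()`. The point isn't the ten lines of code; it's that changing the hierarchy means changing one file, not auditing 312 route files.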
You don't vibe code 86 interconnected database models. You design them. The schema is the architecture. If the schema is wrong, everything built on top of it is wrong, and no amount of AI-generated code fixes a bad data model.
What I Actually Did vs What AI Did
I designed the schema. I decided what entities exist, how they relate, what the access patterns are. I laid out the role hierarchy and decided that ownership is determined by a foreign key, not a role string — a decision that prevents an entire class of permission bugs. Those are system design decisions. The model can help me implement them. It can't make them for me because it doesn't know what the product needs to do.
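The foreign-key decision is easier to see in code. Here's a hypothetical sketch of the idea; these types are illustrative, not our real models.

```typescript
// Sketch: ownership as a foreign key on the organization,
// not a role string on the membership row.

interface Organization {
  id: string;
  ownerId: string; // FK to the owning user: exactly one, enforced by the schema
}

interface Membership {
  userId: string;
  orgId: string;
  role: "admin" | "staff" | "client"; // "owner" is deliberately not a role value
}

// There is no way to "grant" ownership by writing a role string;
// every check resolves to the single ownerId column.
export function isOwner(org: Organization, userId: string): boolean {
  return org.ownerId === userId;
}
```

With a role string, one buggy write of `role = "owner"` to a second membership row silently creates two owners. With a foreign key, the schema guarantees there's exactly one, and the bug class doesn't exist.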
For scaffolding — API routes, Prisma models, React components from Louis's designs — the model does most of the heavy lifting. I describe what I need, review what comes back, and adjust. For a standard CRUD endpoint this is fast and works well. The model is good at this.
But when the messaging system had race conditions — messages arriving out of order in real-time subscriptions — that's a different kind of problem. I could describe the symptom to the model and it would suggest fixes, but I had to trace the actual state through browser DevTools and Firestore to understand what was happening first. When the Stripe webhook handler was silently failing because the event types didn't match our subscription model — I found that in Vercel logs and Stripe's dashboard, then explained the issue to the model and let it help fix the handler. When file uploads broke because Vercel has a 4.5MB body limit — I knew from four years on Firebase that direct-to-Storage uploads bypass this. The model didn't know that constraint existed.
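The webhook failure mode is worth sketching. A handler fails silently when it returns success for event types it never actually processes. A dispatcher that treats an unknown type as a loud miss looks something like this (simplified event shape and illustrative handler names, not our real route):

```typescript
// Sketch: a webhook dispatcher that makes unhandled event types
// visible instead of silently acknowledging them.

type WebhookEvent = { type: string; data: unknown };

const handlers: Record<string, (event: WebhookEvent) => void> = {
  "customer.subscription.updated": () => {
    // sync the org's plan state with the subscription
  },
  "customer.subscription.deleted": () => {
    // downgrade the org
  },
  "invoice.payment_failed": () => {
    // flag a billing issue for the owner
  },
};

// Returns true if the event was handled, so the route can
// log and count the misses instead of swallowing them.
export function dispatch(event: WebhookEvent): boolean {
  const handler = handlers[event.type];
  if (!handler) {
    console.warn(`Unhandled webhook event type: ${event.type}`);
    return false;
  }
  handler(event);
  return true;
}
```

In our case the bug was exactly that mismatch: the events Stripe was sending weren't the types the handler expected, and nothing surfaced the gap until I went looking in the Vercel logs.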
The pattern isn't "AI writes, I review." It's more collaborative than that. Sometimes I describe a problem and the model solves it better than I would have. Sometimes the model goes completely sideways because it doesn't have the context about the system that I carry in my head. Knowing which situation you're in is the skill.
Louis and Me
The difference between Louis's v1 and the production app isn't code quality. Claude wrote decent code for both of us. The difference is that I knew what questions to ask. Does this need multi-tenancy? What happens when two organizations have a user in common? Where do file permissions get checked — at the API layer or the storage layer? What breaks when a webhook fires twice?
Louis couldn't ask those questions because he'd never built a production system before. I could because I'd spent four years debugging them. The models are the same. The questions are different.
That gap is shrinking. The models get better at catching things they used to miss. But someone still needs to understand the problem — what the product needs to do, what constraints the platform imposes, what breaks at scale that doesn't break in dev. Those instincts came from Sam, not from any language fluency.
Where It Stands
Louis is doing go-to-market right now. We're acquiring our first users. The platform works — not because I'm a great JavaScript developer, but because I understand how the pieces fit together.
644 commits in under two months. My JavaScript is still not great. I'm a better debugger than I am a coder, a better architect than I am a programmer, and I'm more useful with AI tools than most people who can write cleaner functions than me.
That's where things are heading. Not "can you code?" but "can you build?" They're not the same question anymore.