
Solo AI-assisted SaaS development

BuildStack: The AI writes better code when the codebase gives it something to work with

Alex is a solo maker building developer tooling. He'd been using AI coding assistants for over a year when he noticed a consistent pattern: the generated code quality was almost entirely a function of codebase context, not model quality. A well-structured codebase with clear conventions and explicit documentation produced coherent, mergeable code. An organically grown codebase without clear patterns produced plausible-looking code that required significant rework. ShipAI solved the second problem.

Alex Oduya

Indie Maker, BuildStack

Background

Alex had been serious about AI-assisted development since before it was fashionable. He'd tried every major tool, developed strong opinions about prompting, and tracked his own metric: what share of AI-generated PRs he could merge with minor changes versus major rework. His honest answer before ShipAI was about 35% minor-change merges. The other 65% needed structural work, not because the AI's logic was wrong, but because the generated code used different patterns than the existing codebase: different error handling styles, different data fetching approaches, different file naming conventions. The AI wasn't wrong; it just didn't know the rules.

The challenge

Alex needed a codebase that would give AI tools enough explicit context to generate structurally consistent code. That meant documented conventions, consistent patterns that repeated enough to be learned from examples, and clear service boundaries that told the AI where code belonged. It also meant TypeScript types that were comprehensive enough that the AI wouldn't guess at data shapes.

How they built it

Context is the product

The first thing Alex did with ShipAI was read the AGENTS.md file — a spec document written for AI coding agents that describes the architecture, conventions, patterns, and examples. He'd never seen that in a boilerplate before. He includes it in every AI coding session, not as a prompt trick but as genuine context. The AI generates code that follows the patterns in that document. Structurally correct code on the first pass.

The types make the difference

When an AI tool can see the TypeScript types for every data structure it's working with — database schema, API response shapes, session types, billing plan definitions — the generated code is meaningfully better. ShipAI's type definitions are comprehensive. Alex doesn't see generated code using `any` for data shapes it should know about. He doesn't see database queries that don't match the schema. The types act as guardrails.
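To make the guardrail idea concrete, here is a minimal sketch of what comprehensive typing looks like in practice. The names (`BillingPlan`, `ApiResult`, `formatPlanPrice`) are illustrative inventions for this example, not ShipAI's actual type definitions.

```typescript
// Hypothetical type definitions sketching the "types as guardrails" idea.
// With shapes pinned down like this, generated code can't silently reach
// for `any` or invent fields the schema doesn't have.
type BillingPlan = {
  id: string;
  name: "free" | "pro" | "team"; // closed union: the AI can't invent a tier
  monthlyPriceCents: number;     // unit is explicit in the name
};

// Discriminated union for API results: callers must check `ok` before
// touching `data`, and the compiler enforces it.
type ApiResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: string };

// A generated helper written against these types is checked end to end:
function formatPlanPrice(plan: BillingPlan): string {
  return `$${(plan.monthlyPriceCents / 100).toFixed(2)}/mo`;
}
```

A tool that can see these definitions in its context window has no reason to guess: the field names, units, and allowed values are all stated in the types themselves.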

Patterns that repeat enough to be learned

Every API route in ShipAI follows the same structure. Every component data flow follows the same pattern. The AI learns from this repetition within the context window: by the time it's writing a new route or component, it has seen enough examples of how the code should look that the output fits. Alex's minor-change merge rate went from 35% to over 80%. The remaining 20% is usually domain-specific logic he'd have to review regardless.
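The kind of repeated route skeleton described above can be sketched like this. The helper names (`jsonOk`, `jsonError`, `handleGetProjects`) and the `Session` shape are assumptions made for illustration, not ShipAI's actual code; the point is that every handler follows the same authenticate, act, respond sequence.

```typescript
// A sketch of a repeated route shape: every handler uses the same
// helpers and the same ordering, so a generated route "looks like"
// every existing route in the codebase.
type Session = { userId: string };

function jsonOk(data: unknown): Response {
  return new Response(JSON.stringify({ ok: true, data }), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}

function jsonError(status: number, message: string): Response {
  return new Response(JSON.stringify({ ok: false, error: message }), {
    status,
    headers: { "content-type": "application/json" },
  });
}

// The skeleton every route repeats: authenticate, then act, then respond.
async function handleGetProjects(session: Session | null): Promise<Response> {
  if (!session) return jsonError(401, "Not signed in");
  const projects = [{ id: "p1", owner: session.userId }]; // stand-in for a DB query
  return jsonOk(projects);
}
```

Once a pattern like this appears in a handful of routes within the context window, a new route generated from the same context tends to reproduce the structure rather than invent its own.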

Shipping three times as many features

Alex tracks his weekly completed PRs. In the three months before adopting ShipAI, his average was about four per week. In the three months after, it was around twelve. The difference isn't him working more — it's the AI writing code he can actually use. Reviewing product logic is faster than reviewing product logic plus fixing structural problems.

Outcomes

3x weekly feature output

Weekly completed PRs went from an average of four to approximately twelve over a three-month period. Alex attributes the increase to AI-generated code requiring structural cleanup in fewer than 20% of cases, down from 65%.

AI code cleanup rate from 65% to under 20%

The fraction of AI-generated pull requests requiring structural changes before merging dropped from 65% to under 20% after adopting ShipAI.

Zero architecture regressions from AI-generated code

In four months of AI-assisted development on ShipAI, no AI-generated code has introduced an architectural regression or required a structural revert.

Reviewing logic, not patterns

Alex's PR review process changed: he now reviews whether the business logic is correct, not whether the structural approach is acceptable. The structure is assumed to be correct because the AI follows the codebase's patterns.

In their own words

I've been doing AI-assisted development seriously for over a year. The model matters less than people think. What matters is the context you give it. A codebase that's consistent, well-typed, and has explicit conventions produces dramatically better output than one that's grown organically. ShipAI is the first codebase I've worked in that was designed with that in mind from the start.

Alex Oduya

Indie Maker, BuildStack

I build almost everything with AI assistance now. The difference isn't the model — it's whether the codebase gives the AI enough context to generate something useful. ShipAI's structure does that from day one. My output is faster and the code doesn't need nearly as much cleanup.

Alex Oduya

Frequently asked questions

What's in the AGENTS.md file?

The codebase architecture, key file locations, naming conventions, data flow patterns, service boundaries, and concrete examples of common implementation tasks — adding an API route, adding an auth method, extending the billing model. It's written to give an AI tool enough context to make consistent choices without exploring the codebase from scratch.
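For readers who haven't seen a spec document like this, here is a brief hypothetical excerpt of the shape such a file might take. The section names, paths, and rules below are invented for illustration; they are not quoted from ShipAI's actual AGENTS.md.

```markdown
## Conventions (hypothetical excerpt)

- API routes live in `app/api/<resource>/route.ts`; every handler
  authenticates first, validates input second, and returns a
  `{ ok, data }` or `{ ok, error }` JSON body.
- All database access goes through the query helpers in `lib/db/`;
  never inline SQL in route handlers.

## Example task: adding an API route

1. Copy the skeleton from an existing route in `app/api/`.
2. Define the request/response types next to the handler.
3. Add the new pattern to this file if it introduces one.
```

The value is less in any individual rule than in the fact that the rules are written down where an AI tool can read them at the start of a session.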

Does Alex use Cursor specifically, or does this work with other AI tools?

He uses Cursor primarily but has tested the same approach with Claude and GitHub Copilot Chat. The AGENTS.md and consistent patterns benefit any context-window-based tool. The quality improvement is consistent across all three.

How does Alex handle features that don't have existing patterns to follow?

He writes the first implementation himself, establishing the pattern, then adds it to AGENTS.md. From that point, the AI can follow it. He treats AGENTS.md as a living spec — it gets a small update whenever a new pattern is established.

Keywords

buildstack case study · solo ai-assisted saas development case study · shipai.today customer story · next.js saas case study · ai saas launch story

https://shipai.today/cases/alex-oduya

Ready to write your own case study?

Start from the same foundation.

Every outcome in these case studies started from ShipAI.today. Production auth, billing, AI infrastructure, admin panel — all included.

  • 12 builders and counting
  • All features from these case studies included
  • Full landing source + SEO infrastructure
See pricing