Background
Jerome had been an ML engineer at two companies before going solo. He knew how to build retrieval systems — embeddings, vector stores, ranking, reranking. What he didn't have was the product around it: auth, billing, a web interface, streaming AI responses, the deployment pipeline. His plan had been to build the infrastructure first, then the product layer. Three months in, the infrastructure was half-done and the product didn't exist. When an investor he'd been talking to offered a demo slot in six weeks, Jerome did the math and realized the plan wasn't going to work.
The challenge
The technical requirements for the demo were specific: live vector search on a real document corpus, relationship traversal for connecting related concepts, streaming AI responses that felt responsive, and a product that looked like something you'd pay for — not a research notebook, not a command-line tool. Jerome needed all of that working and polished in six weeks, with most of that time spent on the parts that were uniquely his.
How they built it
Six days to rebuild the foundation
Jerome found ShipAI, read through the codebase for half a day, and decided to rebuild on it. He spent the next six days migrating his vector retrieval logic onto the existing AI infrastructure — the streaming handler, the tool use system, the database layer. His Qdrant integration plugged in cleanly. The auth and billing layers he'd been procrastinating on were already there. By day six, the core product loop was working.
The parts only he could build
With the foundation handled, Jerome spent the next four weeks on the things only he could build: the knowledge graph schema, the semantic ranking logic, the concept-to-concept relationship traversal, the query interface. This was the work he'd been delaying for three months while building infrastructure. It turned out four weeks was enough time to do it well.
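Jerome's traversal code isn't public, but the idea of concept-to-concept relationship traversal can be illustrated as a bounded breadth-first walk over an adjacency map. This is a minimal sketch with hypothetical concepts and edge labels, not his actual schema:

```python
from collections import deque

# Hypothetical knowledge graph: concept -> [(related concept, edge label)].
GRAPH = {
    "vector search": [("embeddings", "uses"), ("ranking", "feeds")],
    "embeddings": [("tokenization", "requires")],
    "ranking": [("reranking", "refined by")],
    "reranking": [],
    "tokenization": [],
}

def related_concepts(start, max_hops=2):
    """Collect (source, relation, target) edges reachable from `start`
    within `max_hops` hops, breadth-first, skipping already-seen concepts."""
    seen = {start}
    frontier = deque([(start, 0)])
    found = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand past the hop budget
        for neighbor, relation in GRAPH.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                found.append((node, relation, neighbor))
                frontier.append((neighbor, depth + 1))
    return found

edges = related_concepts("vector search")
for src, rel, dst in edges:
    print(f"{src} --{rel}--> {dst}")
```

Bounding the hop count is the design choice that matters here: it keeps the related-concept set small enough to inject into an AI context without flooding it.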
The demo on real data
In week five, Jerome loaded the demo with a real document corpus from a public dataset relevant to the investor's domain. He ran through the demo flow a dozen times, tweaked the streaming response formatting, adjusted the search ranking. On demo day, the product responded to natural language queries with streaming AI answers backed by actual retrieved documents. The investor asked if they could keep the login.
From demo to product
After the round closed, Jerome went back to the same codebase — not a new one. The demo infrastructure was the production infrastructure. He extended the vector retrieval, added new knowledge graph schemas, built the enterprise onboarding flow. He didn't rewrite anything. The foundation that worked for the demo worked for the product.
Outcomes
Seed round closed on the demo
The investor committed within a week of the demo. Two others joined the round in the following three weeks.
Demo-ready product in six days
Jerome rebuilt his entire product foundation in six days — auth, billing, AI streaming, database layer — and spent the remaining five weeks on the product itself.
Three months of infrastructure work abandoned for a better approach
Jerome's self-built infrastructure stack represented roughly three months of work. He replaced it in six days. He describes that as one of his best decisions.
No rewrite after the round
The demo codebase became the production codebase. Jerome has been shipping on the same foundation since day six.
In their own words
“I spent three months building infrastructure I thought I had to build myself. That was a mistake — not because the work was bad but because it was available off the shelf in a form that would have taken me six days instead of three months. The painful part is that I knew vector stores, I knew retrieval, I knew what good looked like. I just didn't know I could skip everything else.”
“I had six weeks to build something investors would believe in. I'd already spent three months on infrastructure that wasn't the product. ShipAI was the reset I needed — it let me spend those six weeks on the parts only I could build.”
— Jerome Okafor
Frequently asked questions
Why did Jerome choose to rebuild rather than extend what he had?
His self-built infrastructure had accumulated design decisions he wasn't happy with. He saw the ShipAI codebase as the right foundation and made the call to start clean. Six days of rebuilding versus months of patching.
How does vector search work in Jerome's product?
Users submit natural language queries. The query is embedded and used to retrieve the top-k most semantically similar documents from Jerome's Qdrant collection. Those documents, along with relationship context from the knowledge graph, are injected into the AI context before a streaming response is generated.
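The pipeline above — embed the query, retrieve the top-k most similar documents, inject them into the AI context — can be sketched in a few lines. This is a toy illustration only: a bag-of-words counter stands in for a real embedding model, and an in-memory list stands in for the Qdrant collection; every name here is hypothetical:

```python
import math
from collections import Counter

# Toy corpus standing in for the document collection; a real system
# would store precomputed embeddings in Qdrant and query it instead.
DOCS = [
    "vector stores index embeddings for similarity search",
    "reranking reorders retrieved documents by relevance",
    "streaming responses send tokens as they are generated",
]

def embed(text):
    """Toy bag-of-words 'embedding'; a real system uses a trained model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Embed the query and return the top-k most similar documents."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_context(query):
    """Inject retrieved documents into the prompt before generation."""
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_context("how does similarity search over embeddings work"))
```

In the real product, the prompt assembled this way would also carry relationship context from the knowledge graph before the streaming response is generated.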
What happened after the seed round closed?
Jerome continued building on the same codebase. He added new knowledge graph schemas, built an enterprise onboarding flow, and extended the ranking logic. The foundation has held up without changes under production load.