
Codebase

Architecture map of the ship-ai monorepo: boundaries, ownership, and execution model.

This section documents how ship-ai is structured and how requests move through the system.

The repository is a Bun + Turborepo monorepo with three Next.js apps and shared platform packages. The architecture is optimized for:

  • reusable domain logic in packages/*
  • thin app layers in apps/*
  • environment-driven feature/service toggles
  • streaming AI workflows with billing, tracing, and persistence integrated
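Task orchestration for these apps and packages is driven by the root turbo.json. A minimal sketch of what such a config can look like (the task names, outputs, and env vars here are illustrative assumptions, not the repo's actual file):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  },
  "globalEnv": ["DATABASE_URL"]
}
```

`"dependsOn": ["^build"]` is what makes packages/* build before the apps/* that consume them.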

High-Level Repository Shape

apps/
  app/
  admin/
  web/
packages/
  db/
  auth/
  billing/
  jobs/
  memory/
  features/
  stripe/
  tracing/
  logger/
  storage/
  ui/
  i18n/
docs/
  OPTIONAL_SERVICES.md
  TELEGRAM_BOT.md
  TROUBLESHOOTING.md
package.json
turbo.json
docker-compose.yml

What This Section Covers

Architectural Baseline

  • Runtime + orchestration: Bun (bun@1.1.26) + Turborepo
  • Frontend framework: Next.js 16 App Router (all apps)
  • Persistence: PostgreSQL via Drizzle (@ai/db)
  • Auth: Better Auth (@ai/auth)
  • Queueing: BullMQ workers (@ai/jobs) on Redis
  • Storage: S3-compatible object storage (@ai/storage)
  • AI path: handler-based orchestration in apps/app/src/lib/ai/*
  • Observability: structured logs + tracing packages (@ai/logger, @ai/tracing)
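The backing services in this list (PostgreSQL, Redis, S3-compatible storage) line up with the root docker-compose.yml. A hedged sketch of a local stack under those assumptions (service names, images, and ports are illustrative, not the repo's actual file):

```yaml
services:
  postgres:              # PostgreSQL for @ai/db (Drizzle)
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
  redis:                 # Redis backing the BullMQ workers in @ai/jobs
    image: redis:7
    ports:
      - "6379:6379"
  minio:                 # S3-compatible object storage for @ai/storage
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"
```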

Read This Before Implementing Features

Practical implementation rules for this repo:

  1. Put reusable business logic in packages/*.
  2. Keep app route handlers and server actions focused on orchestration and auth/context.
  3. Use @ai/db queries and schema as the canonical data layer.
  4. Wire feature/service behavior through @ai/features and env flags instead of hardcoded branching.

If you are adding anything non-trivial, read Monorepo Structure and Data Flow first.
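The four rules above can be sketched as code. Every identifier below is hypothetical (none of it comes from ship-ai): business logic lives in a framework-free package module, a flag helper reads env vars, and the app-layer handler does only auth/context checks and delegation.

```typescript
// Hypothetical packages/* module: reusable, framework-free business logic (rule 1).
type UsageRow = { userId: string; tokens: number };

export function totalTokensByUser(rows: UsageRow[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const row of rows) {
    totals.set(row.userId, (totals.get(row.userId) ?? 0) + row.tokens);
  }
  return totals;
}

// Hypothetical @ai/features-style helper: behavior driven by env flags,
// not hardcoded branching (rule 4).
export function isEnabled(
  flag: string,
  env: Record<string, string | undefined> = process.env,
): boolean {
  return env[`FEATURE_${flag.toUpperCase()}`] === "true";
}

// Hypothetical apps/* handler: orchestration + auth/context only (rules 2-3).
// `loadRows` stands in for a canonical @ai/db query.
export async function handleUsageReport(
  session: { userId: string } | null,
  loadRows: () => Promise<UsageRow[]>,
): Promise<{ status: number; body?: unknown }> {
  if (!session) return { status: 401 };                    // auth/context check
  if (!isEnabled("usage_report")) return { status: 404 };  // env-driven toggle
  const rows = await loadRows();                           // delegate to data layer
  return { status: 200, body: Object.fromEntries(totalTokensByUser(rows)) };
}
```

The handler stays thin on purpose: everything it calls could be reused from another app in apps/* without duplicating logic.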

Docs maintenance tip

When source code changes, update this section alongside README.md, FEATURES.md, and docs/OPTIONAL_SERVICES.md in ship-ai to keep architecture boundaries consistent.
