# Operations: Workers and Jobs

How the BullMQ worker system runs, what queues exist, and how to operate them safely.
`@ai/jobs` uses BullMQ with Redis (`REDIS_URL`) and starts workers from `packages/jobs/worker.ts`.

- Use `bun dev:worker` during application development.
- Use `bun --filter=@ai/jobs worker:dev` when isolating worker behavior from the app runtime.
- Use `docker compose --profile worker up -d worker` for containerized worker runs.
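The worker resolves its Redis connection from `REDIS_URL`. As a minimal sketch, here is how such a URL could map to host/port connection options; the helper name and shape are hypothetical, not taken from `packages/jobs/worker.ts`:

```typescript
// Hypothetical helper: map a REDIS_URL string to host/port connection
// options. Not copied from packages/jobs/worker.ts.
function redisConnectionFromUrl(url: string): { host: string; port: number; password?: string } {
  const parsed = new URL(url);
  const options: { host: string; port: number; password?: string } = {
    host: parsed.hostname,
    // Redis defaults to port 6379 when the URL omits one.
    port: parsed.port ? Number(parsed.port) : 6379,
  };
  if (parsed.password) options.password = parsed.password;
  return options;
}

// e.g. redisConnectionFromUrl("redis://localhost:6379")
// → { host: "localhost", port: 6379 }
```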
## Queues

- `hello-world`
- `transcription`
- `file-indexing`
- `log-cleanup`
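For scripts that enqueue or inspect jobs, the queue names can be captured in a typed constant. This is a sketch; `@ai/jobs` may declare these differently:

```typescript
// The four queue names as a typed constant (illustrative only).
const QUEUE_NAMES = ["hello-world", "transcription", "file-indexing", "log-cleanup"] as const;
type QueueName = (typeof QUEUE_NAMES)[number];

// Guard for validating a queue name before enqueueing or inspecting it.
function isKnownQueue(name: string): name is QueueName {
  return (QUEUE_NAMES as readonly string[]).includes(name);
}
```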
## Start Worker Processes

Local worker dev mode:

```sh
bun dev:worker
```

Direct package script:

```sh
bun --filter=@ai/jobs worker:dev
```

Docker worker profile:

```sh
docker compose --profile worker up -d worker
```

## Scheduled Jobs
The `log-cleanup` repeat job is registered on worker startup:

- Schedule: daily at 02:00 UTC
- Default retention payload: 30 days
- Existing repeatable entries are removed and then re-registered to avoid duplicates
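In sketch form, the schedule and retention described above look like the following; the cron string and the cutoff helper are assumptions matching the description, not values read from the code:

```typescript
// Assumed values matching the description above: daily at 02:00 UTC,
// 30-day default retention.
const LOG_CLEANUP_CRON = "0 2 * * *";
const DEFAULT_RETENTION_DAYS = 30;

// Cutoff timestamp: log entries older than this are eligible for deletion.
function retentionCutoff(now: Date, retentionDays: number = DEFAULT_RETENTION_DAYS): Date {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return new Date(now.getTime() - retentionDays * DAY_MS);
}
```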
## Queue Behavior

- Retries are configured per queue with exponential backoff
- `transcription` and `file-indexing` workers run with concurrency 2
- `log-cleanup` runs serially (concurrency 1)
- Job cleanup is enabled (`removeOnComplete`/`removeOnFail` thresholds)
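The retry delays follow the usual exponential shape. A hedged illustration of the common `delay * 2^(attempt - 1)` formula; BullMQ's builtin strategy may differ in rounding details, and the per-queue base delays are not specified here:

```typescript
// Common exponential backoff: the delay doubles with each retry attempt.
function exponentialBackoffMs(baseDelayMs: number, attemptsMade: number): number {
  return baseDelayMs * 2 ** (attemptsMade - 1);
}

// With a 1s base, attempts 1..4 wait 1s, 2s, 4s, 8s.
```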
## Operational Checks

Check Redis connectivity:

```sh
docker compose exec redis redis-cli ping
```

Tail worker and app logs:

```sh
docker compose logs -f worker
docker compose logs -f app
```

Inspect BullMQ keys:

```sh
docker compose exec redis redis-cli
# KEYS bull:*
```

## Failure Patterns to Watch
- Worker starts but there is no throughput: Redis connectivity problems or a missing worker process
- Repeated transcription failures: AI provider keys/quotas, or converter/downstream file issues
- File-indexing failures: extraction service failures, storage download issues, or memory service connectivity
- Log cleanup not running: the worker is down, or schedule registration did not complete at startup
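When diagnosing the patterns above, `KEYS bull:*` output can be summarized per queue to see where jobs are accumulating. A hypothetical helper, assuming BullMQ's default key layout of `bull:<queue>:<suffix>`:

```typescript
// Count Redis keys per BullMQ queue from a `KEYS bull:*` listing.
function countKeysByQueue(keys: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const key of keys) {
    const parts = key.split(":");
    if (parts[0] !== "bull" || parts.length < 3) continue; // skip non-BullMQ keys
    counts.set(parts[1], (counts.get(parts[1]) ?? 0) + 1);
  }
  return counts;
}

// e.g. ["bull:transcription:wait", "bull:transcription:1"]
// → Map { "transcription" → 2 }
```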
## Safe Operations Checklist

- `REDIS_URL` resolves from the worker runtime context
- The worker process runs continuously in the target environment
- Logs are monitored for failed jobs and worker errors
- Optional features are disabled when dependent services are absent
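The first checklist item can be enforced with a fail-fast guard at worker startup. A sketch; the real entrypoint may validate its environment differently:

```typescript
// Fail fast when a required variable (e.g. REDIS_URL) is missing, so the
// worker never starts half-configured.
function requireEnv(name: string, env: Record<string, string | undefined> = process.env): string {
  const value = env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// Usage at startup: const redisUrl = requireEnv("REDIS_URL");
```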