Project: AI Provider Gateway
- Difficulty: Intermediate
- Primary stack: TypeScript API, optional Python worker
- Estimated duration: 1 to 2 weeks
- Primary hiring signal: LLM backend architecture
- Primary monetization signal: reusable AI platform starter
Problem statement
Teams need one API surface for model calls, not provider-specific logic scattered across the product.
Project implementation
- Stack: TypeScript API, optional Python worker
- Folder: projects/p01-ai-provider-gateway
What it proves
You can build a provider-agnostic LLM runtime with streaming, structured output, tracing, and mock-first development.
Next upgrade path
Connect this gateway to real OpenAI/Anthropic/local providers.
Key commands
npm ci
npm run typecheck
npm run build
npm run smoke
Core workflows
- accept normalized chat requests
- route to a selected provider
- stream partial output
- validate final structured output
- log cost, latency, and errors
Architecture
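The gateway sits between callers and model providers: it accepts a normalized chat request, routes it to a provider adapter behind one shared interface, and records latency and estimated cost for every run. The brief leaves the exact layout to you; here is one minimal TypeScript sketch of that shape. Every identifier below (ChatRequest, Provider, Gateway) is illustrative, not the starter's actual API.

```ts
// Hypothetical core types; the starter's real identifiers may differ.

// A normalized chat request, independent of any provider's wire format.
interface ChatRequest {
  model: string; // logical model name, mapped per provider
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  stream?: boolean;
  responseSchema?: object; // JSON Schema for validated structured output
}

interface ChatResult {
  text: string;
  latencyMs: number;
  estimatedCostUsd: number;
}

// The single interface every adapter implements (mock, OpenAI, ...).
interface Provider {
  name: string;
  chat(req: ChatRequest): Promise<ChatResult>;
  chatStream(req: ChatRequest): AsyncIterable<string>; // partial tokens/events
}

// The gateway routes a normalized request to a selected provider
// and logs latency and estimated cost on every call.
class Gateway {
  constructor(private providers: Map<string, Provider>) {}

  async chat(providerName: string, req: ChatRequest): Promise<ChatResult> {
    const provider = this.providers.get(providerName);
    if (!provider) throw new Error(`unknown provider: ${providerName}`);
    const started = Date.now();
    const result = await provider.chat(req);
    console.log(
      JSON.stringify({
        provider: provider.name,
        latencyMs: Date.now() - started,
        estimatedCostUsd: result.estimatedCostUsd,
      })
    );
    return result;
  }
}
```

Keeping routing, logging, and fallback in the gateway rather than in the adapters is what lets a second provider land as a single new class in milestone three.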
Milestones
- Single provider, single schema, non-streaming response
- Add streaming transport
- Add second provider adapter
- Add tracing, retries, and fallback strategy
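For the streaming milestone, one minimal transport is server-sent events, where each partial token becomes one `data:` event. The sketch below uses Node's built-in http server and a stand-in token generator; the route shape and helper names are assumptions, not part of the starter.

```ts
import { createServer } from "node:http";

// Hypothetical async token source standing in for provider.chatStream().
async function* mockTokens(): AsyncIterable<string> {
  for (const tok of ["Hello", ", ", "world", "!"]) {
    yield tok;
    await new Promise((r) => setTimeout(r, 100)); // simulate model latency
  }
}

// Minimal SSE transport: flush each partial token as its own event.
createServer(async (_req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  for await (const token of mockTokens()) {
    res.write(`data: ${JSON.stringify({ token })}\n\n`);
  }
  res.write("data: [DONE]\n\n");
  res.end();
}).listen(3000);
```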
Acceptance criteria
- one endpoint supports validated JSON output
- one streamed endpoint emits partial tokens or events
- at least two provider adapters share one interface
- every run logs latency and estimated cost
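For the validated JSON criterion, the final provider output should be parsed and checked against a schema before it is returned to the caller. A minimal sketch, assuming zod as the validator (the project does not prescribe one) and a hypothetical Answer schema:

```ts
import { z } from "zod"; // one common choice; any schema validator works

// Hypothetical schema for a structured-output endpoint.
const Answer = z.object({
  summary: z.string(),
  confidence: z.number().min(0).max(1),
});

// Validate the provider's final output before returning it to the caller.
function validateFinalOutput(raw: string) {
  const parsed = Answer.safeParse(JSON.parse(raw));
  if (!parsed.success) {
    // Count this as a failed run and surface the schema errors in the trace.
    throw new Error(`structured output failed validation: ${parsed.error.message}`);
  }
  return parsed.data;
}
```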
Starter implementation note
The starter code under projects/p01-ai-provider-gateway includes a runnable MockProvider and an OpenAIProvider placeholder. The OpenAI adapter is intentionally not wired to the real API yet, so the default learning path should stay on PROVIDER=mock until you implement the provider-specific client.
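To stay on the mock path, the adapter only needs to satisfy the shared interface. A sketch in the spirit of the starter's MockProvider, with all details assumed rather than copied from the code under projects/p01-ai-provider-gateway:

```ts
// Illustrative mock adapter; the starter's actual implementation may differ.
const mockProvider = {
  name: "mock",
  async chat(req: { messages: { content: string }[] }) {
    return {
      text: `echo: ${req.messages.at(-1)?.content ?? ""}`,
      latencyMs: 5,
      estimatedCostUsd: 0, // mock runs cost nothing
    };
  },
  async *chatStream(req: { messages: { content: string }[] }) {
    for (const word of (req.messages.at(-1)?.content ?? "").split(" ")) {
      yield word + " "; // emit one partial token per word
    }
  },
};

// Mirrors the documented default: stay on PROVIDER=mock until the
// provider-specific OpenAI client is actually implemented.
const providerName = process.env.PROVIDER ?? "mock";
```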
Portfolio packaging
Show provider abstraction design, schema validation, trace screenshots, and a side-by-side provider comparison.
Monetization path
This project can become an internal gateway product, consulting accelerator, or open-core template with enterprise logging and guardrails.