Learning Path

Phase 1: Build core LLM application skills

Start with provider abstractions, prompt contracts, streaming, and structured outputs. The main outcome is an API layer that can talk to multiple models while keeping response formats stable.
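
As a rough sketch of what that API layer can look like, the TypeScript below defines a provider-agnostic interface with one stable response shape. The `EchoProvider` adapter and the `answer` helper are illustrative stand-ins, not any vendor's real SDK.

```ts
// Minimal sketch of a provider abstraction with a stable response contract.
// Names like EchoProvider are placeholders, not a real SDK.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// The stable contract every provider must return, regardless of vendor wire format.
interface CompletionResult {
  text: string;
  model: string;
  finishReason: "stop" | "length" | "error";
}

interface LlmProvider {
  complete(messages: ChatMessage[]): Promise<CompletionResult>;
}

// Example adapter: each provider maps its own response format into CompletionResult.
class EchoProvider implements LlmProvider {
  constructor(private model: string) {}
  async complete(messages: ChatMessage[]): Promise<CompletionResult> {
    const last = messages[messages.length - 1];
    return { text: `echo: ${last?.content ?? ""}`, model: this.model, finishReason: "stop" };
  }
}

// Callers depend only on LlmProvider, so swapping vendors never changes the response shape.
async function answer(provider: LlmProvider, question: string): Promise<CompletionResult> {
  return provider.complete([
    { role: "system", content: "Answer concisely." },
    { role: "user", content: question },
  ]);
}
```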

Phase 2: Add retrieval and grounded answers

Learn how document pipelines, chunking, embeddings, retrieval, reranking, and citations turn a chat endpoint into a knowledge system.
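
The sketch below shows the core loop of that pipeline under simplified assumptions: fixed-size chunking, a toy character-frequency "embedding" standing in for a real embedding model, cosine-similarity retrieval, and source fields carried through so answers can cite their chunks.

```ts
// Minimal sketch of a retrieval pipeline with citations.
// embed() is a toy stand-in; a real system would call an embedding model.

interface Chunk {
  id: string;
  source: string;   // document the chunk came from, used for citations
  text: string;
  vector: number[];
}

// Toy embedding: character-frequency vector over a-z.
function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Split a document into fixed-size chunks and embed each one.
function chunkDocument(source: string, text: string, size = 200): Chunk[] {
  const chunks: Chunk[] = [];
  for (let i = 0; i < text.length; i += size) {
    const piece = text.slice(i, i + size);
    chunks.push({ id: `${source}#${i}`, source, text: piece, vector: embed(piece) });
  }
  return chunks;
}

// Retrieve the top-k chunks for a query; their source fields become citations.
function retrieve(query: string, index: Chunk[], k = 3): Chunk[] {
  const q = embed(query);
  return [...index].sort((a, b) => cosine(q, b.vector) - cosine(q, a.vector)).slice(0, k);
}
```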

Phase 3: Move from single calls to orchestration

Add planning, state, tools, multi-step execution, retries, and approvals. This is the move from “chat feature” to “agent workflow.”
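
A minimal orchestrator loop, under assumed interfaces, might look like the sketch below: a plan of tool-call steps executed in order, a retry loop around each call, and an approval gate on sensitive steps. The tool registry and the `approve` callback are illustrative, not a fixed API.

```ts
// Minimal sketch of a multi-step, tool-using orchestrator with retries and approvals.

type ToolFn = (args: Record<string, string>) => Promise<string>;

interface PlanStep {
  tool: string;
  args: Record<string, string>;
  requiresApproval?: boolean;
}

interface StepResult { step: PlanStep; output?: string; status: "done" | "rejected" | "failed"; }

async function runPlan(
  plan: PlanStep[],
  tools: Record<string, ToolFn>,
  approve: (step: PlanStep) => Promise<boolean>,
  maxRetries = 2,
): Promise<StepResult[]> {
  const results: StepResult[] = [];
  for (const step of plan) {
    // Approval gate: sensitive steps wait for an operator decision.
    if (step.requiresApproval && !(await approve(step))) {
      results.push({ step, status: "rejected" });
      break;
    }
    // Retry loop around each tool call.
    let output: string | undefined;
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        output = await tools[step.tool](step.args);
        break;
      } catch {
        if (attempt === maxRetries) results.push({ step, status: "failed" });
      }
    }
    if (output !== undefined) results.push({ step, output, status: "done" });
    else break; // stop the workflow on a persistently failing step
  }
  return results;
}
```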

Phase 4: Productize the operator experience

Expose plans, timelines, approvals, sources, and execution state inside a serious UI. This is where many demos fail and real products begin.
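
One way to think about that UI is as a single run-state object the operator screen renders. The field names below are illustrative assumptions, not a fixed schema.

```ts
// Minimal sketch of the state an operator UI renders for one run:
// the plan, a timeline of events, pending approvals, cited sources, and status.

type RunStatus = "planning" | "running" | "waiting_approval" | "done" | "failed";

interface TimelineEvent {
  at: string;            // ISO timestamp
  kind: "step_started" | "step_finished" | "approval_requested" | "approval_decided";
  detail: string;
}

interface ApprovalRequest {
  id: string;
  description: string;   // what the agent wants to do, in plain language
  decided?: "approved" | "rejected";
}

interface Citation { source: string; excerpt: string; }

// Everything the operator screen needs to show one run end to end.
interface RunView {
  id: string;
  status: RunStatus;
  plan: string[];        // human-readable step descriptions
  timeline: TimelineEvent[];
  approvals: ApprovalRequest[];
  sources: Citation[];
}
```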

Phase 5: Harden for production

Security, evals, observability, CI/CD, and deployment are first-class topics because enterprise adoption depends on them.
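
As one concrete piece of that hardening, the sketch below shows a tiny eval harness: run fixed cases through the system, check each output, and emit a structured record that CI and observability tooling can consume. The case shape and the substring check are simplifying assumptions.

```ts
// Minimal sketch of an eval harness that produces structured, CI-consumable results.

interface EvalCase { id: string; input: string; mustContain: string; }

interface EvalRecord {
  caseId: string;
  passed: boolean;
  latencyMs: number;
  output: string;
}

async function runEvals(
  cases: EvalCase[],
  system: (input: string) => Promise<string>,
): Promise<EvalRecord[]> {
  const records: EvalRecord[] = [];
  for (const c of cases) {
    const start = Date.now();
    const output = await system(c.input);
    records.push({
      caseId: c.id,
      passed: output.includes(c.mustContain),
      latencyMs: Date.now() - start,
      output,
    });
  }
  // In CI, fail the release if the pass rate drops below an agreed threshold.
  const passRate = records.filter(r => r.passed).length / records.length;
  console.log(JSON.stringify({ passRate, records }, null, 2));
  return records;
}
```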

Completion criteria

You are done when you can explain and demonstrate:

  • one production-ready provider integration
  • one grounded RAG workflow
  • one multi-step tool-using orchestrator
  • one operator-facing UI with approvals
  • one release process with observability and deployment documentation