LiteLLM Agent Platform runs on Kubernetes. For local development, a single script provisions a kind cluster with everything pre-configured. For production, the recommended path is AWS EKS for the sandbox cluster and Render for the web and worker services.

## Documentation Index
Fetch the complete documentation index at: https://docs.litellm-agent-platform.ai/llms.txt
Use this file to discover all available pages before exploring further.
## Prerequisites
Make sure the following tools are installed before you begin:

- Docker Desktop
- kind
- kubectl
- helm
- A LiteLLM gateway URL and API key — LAP routes all model calls through a LiteLLM proxy
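As a quick preflight check, you can confirm each required CLI is on your `PATH` (a convenience sketch, not part of the repo):

```shell
# Print "ok" or "missing" for each required CLI tool.
for tool in docker kind kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```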
## Local development setup
### Configure environment variables
Copy the example env file and fill in the required values. At minimum, set these variables in `.env`:

| Variable | Description |
|---|---|
| `DATABASE_URL` | Postgres connection string. The default points at the local container in `docker-compose.yml`. |
| `MASTER_KEY` | Secret used to authenticate `lap` CLI and API calls. Choose a strong random string. |
| `LITELLM_API_BASE` | URL of your LiteLLM gateway (e.g. `https://your-litellm-proxy.example.com`). |
| `LITELLM_API_KEY` | API key for the LiteLLM gateway. |
| `PREINSTALLED_GITHUB_REPO` | Git repo URL cloned into sandboxes when no per-agent repo is set. |
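The copy step mentioned above might look like this, assuming the repo follows the common `.env.example` convention (the filename is a guess; check the repo root for the actual template):

```shell
# Hypothetical filename; adjust to the repo's actual layout.
cp .env.example .env
# Then open .env and fill in the variables from the table above.
```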
`DATABASE_URL` must be a direct (non-pooled) Postgres connection string. If you use Neon, select the “Direct” connection in Project → Connection Details — the pgbouncer pooler blocks the advisory locks that Prisma migrations require.

### Provision the kind cluster
Run the bootstrap script to create a local Kubernetes cluster. The script is idempotent — you can run it multiple times without side effects. It:
- Creates a kind cluster named `agent-sbx`
- Installs the agent-sandbox controller (v0.4.5)
- Opens NodePort mappings on ports 30000–30099 so the web service can reach sandbox pods
- Loads the local harness image (`opencode-sandbox:dev`) into the cluster
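Running the script and verifying the result might look like this (the script path here is a placeholder; check the repo for the actual location):

```shell
# Path is hypothetical; substitute the repo's actual bootstrap script.
./bin/kind-up.sh

# Verify the cluster exists and its node is ready.
kind get clusters                            # expect: agent-sbx
kubectl get nodes --context kind-agent-sbx
```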
### Start the platform services
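Assuming the repo's `docker-compose.yml` defines the web, worker, and Postgres services, a typical start sequence is:

```shell
docker compose up -d          # start Postgres, web, and worker in the background
docker compose ps             # confirm all services report "running"
docker compose logs -f web    # optionally tail the web service logs
```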
### Open the web UI
Navigate to http://localhost:3000. Sign in with the `MASTER_KEY` you set in `.env`.

From here you can create agents, configure harness types, and monitor sessions. Once you have at least one agent, point the `lap` CLI at http://localhost:3000 and follow the quickstart.

### Tear down the local cluster
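Given the defaults above, teardown is typically (the compose step assumes you started the services with Docker Compose):

```shell
kind delete cluster --name agent-sbx   # remove the local sandbox cluster
docker compose down                    # stop the web, worker, and Postgres containers
```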
## Deploying to production
For production deployments, the recommended architecture is:

- AWS EKS for the Kubernetes sandbox cluster
- Render for the web server and background worker
The `deploy/` directory in the repository contains everything you need:

- `bin/eks-up.sh` provisions the EKS cluster with the correct node groups and the agent-sandbox controller installed.
- `deploy/render/README.md` includes a Render Blueprint that deploys the web and worker services in one click.
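Running the provisioning script might look like this (the working directory and the verification commands are assumptions, not taken from the repo):

```shell
# Assumes bin/eks-up.sh lives under deploy/; adjust if the repo differs.
cd deploy
./bin/eks-up.sh

# Sanity checks once the script finishes:
kubectl get nodes
kubectl get pods -A | grep agent-sandbox   # controller pod name is an assumption
```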
### Required environment variables for production
Set these as secrets in your Render service (or equivalent):

| Variable | Description |
|---|---|
| `DATABASE_URL` | Direct Postgres connection string (Neon or RDS). |
| `MASTER_KEY` | Authentication secret for the API and web UI. |
| `LITELLM_API_BASE` | Your LiteLLM gateway URL. |
| `LITELLM_API_KEY` | API key for the LiteLLM gateway. |
| `PREINSTALLED_GITHUB_REPO` | Fallback repo URL cloned into sandboxes. |
| `BASE_URL` | Public URL of your LAP deployment. Required for OAuth redirect URIs. |
| `ENCRYPTION_KEY` | Base64-encoded 32-byte key for encrypting stored OAuth tokens. Generate with: `node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"` |
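If you prefer not to use Node.js, `openssl` produces a key of the same shape as the `node` one-liner in the table:

```shell
# 32 random bytes, base64-encoded (a 44-character string).
openssl rand -base64 32
```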
Harness images are snapshotted at agent-creation time. If you update `K8S_HARNESS_IMAGE` or `K8S_HARNESS_IMAGE_OPENCODE` after creating an agent, existing agents keep their old image. Delete and recreate the agent to pick up the new image.

## Next steps
- Follow the quickstart to install `lap` and open your first sandbox.
- Review the environment variable reference for the full list of tuning options.
- See `deploy/render/README.md` in the repository for the one-click Render Blueprint.