

LiteLLM Agent Platform runs on Kubernetes. For local development, a single script provisions a kind cluster with everything pre-configured. For production, the recommended path is AWS EKS for the sandbox cluster and Render for the web and worker services.

Prerequisites

Make sure the following tools are installed before you begin. The steps below use Git, Docker (with the Compose plugin), kind, and Node.js.

Local development setup

1. Clone the repository

git clone https://github.com/BerriAI/litellm-agent-platform.git
cd litellm-agent-platform
2. Configure environment variables

Copy the example env file and fill in the required values:
cp .env.example .env
At minimum, set these variables in .env:
  • DATABASE_URL: Postgres connection string. The default points at the local container in docker-compose.yml.
  • MASTER_KEY: Secret used to authenticate lap CLI and API calls. Choose a strong random string.
  • LITELLM_API_BASE: URL of your LiteLLM gateway (e.g. https://your-litellm-proxy.example.com).
  • LITELLM_API_KEY: API key for the LiteLLM gateway.
  • PREINSTALLED_GITHUB_REPO: Git repo URL cloned into sandboxes when no per-agent repo is set.
DATABASE_URL must be a direct (non-pooled) Postgres connection string. If you use Neon, select the “Direct” connection in Project → Connection Details — the pgbouncer pooler blocks the advisory locks that Prisma migrations require.
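Putting the variables above together, a filled-in .env might look like the sketch below. Every value is an illustrative placeholder (hostnames, keys, and the repo URL are assumptions), not a working credential:

```shell
# Illustrative .env sketch -- every value is a placeholder, not a real credential.
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/lap"   # direct (non-pooled) connection
MASTER_KEY="replace-with-a-long-random-string"
LITELLM_API_BASE="https://your-litellm-proxy.example.com"
LITELLM_API_KEY="sk-replace-me"
PREINSTALLED_GITHUB_REPO="https://github.com/your-org/your-repo.git"
```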
3. Provision the kind cluster

Run the bootstrap script to create a local Kubernetes cluster:
bin/kind-up.sh
The script is idempotent — you can run it multiple times without side effects. It:
  • Creates a kind cluster named agent-sbx
  • Installs the agent-sandbox controller (v0.4.5)
  • Opens NodePort mappings on ports 30000–30099 so the web service can reach sandbox pods
  • Loads the local harness image (opencode-sandbox:dev) into the cluster
The NodePort range caps concurrent live sandboxes at 100. This is fine for development. For higher fan-out in production, use an ingress controller instead of NodePort.
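As a hypothetical pre-flight check (this is not part of bin/kind-up.sh), you can confirm the size of the NodePort range and warn about ports already in use on the machine. The /dev/tcp probe is bash-specific and silently does nothing in other shells:

```shell
# The range bin/kind-up.sh maps: one NodePort per concurrent sandbox.
PORT_START=30000
PORT_END=30099
SLOTS=$((PORT_END - PORT_START + 1))
echo "NodePort range $PORT_START-$PORT_END -> $SLOTS concurrent sandboxes"

# Warn if anything on this machine already listens in that range (bash /dev/tcp).
for p in $(seq "$PORT_START" "$PORT_END"); do
  if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
    echo "warning: port $p already in use"
  fi
done
```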
4. Start the platform services

docker compose up
This boots Postgres, runs the Prisma schema migration, and starts the web server on port 3000 and the background worker.
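Because docker compose up streams logs rather than signaling readiness, a small polling helper can tell you when the web server is actually up. This is a sketch, not part of the repository; it assumes curl is installed and that the web server answers HTTP on the URL you pass:

```shell
# Hypothetical helper: poll a URL until it answers, or give up after N tries.
wait_for_http() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url" 2>/dev/null; then
      return 0   # endpoint answered
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1       # gave up
}

# Example: wait_for_http http://localhost:3000 60
```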
5. Open the web UI

Navigate to http://localhost:3000 and sign in with the MASTER_KEY you set in .env. From here you can create agents, configure harness types, and monitor sessions. Once you have at least one agent, point the lap CLI at http://localhost:3000 and follow the quickstart.

Tear down the local cluster

kind delete cluster --name agent-sbx

Deploying to production

For production deployments, the recommended architecture is:
  • AWS EKS for the Kubernetes sandbox cluster
  • Render for the web server and background worker
The deploy/ directory in the repository contains everything you need:
  • bin/eks-up.sh provisions the EKS cluster with the correct node groups and the agent-sandbox controller installed.
  • deploy/render/README.md includes a Render Blueprint that deploys the web and worker services in one click.

Required environment variables for production

Set these as secrets in your Render service (or equivalent):
  • DATABASE_URL: Direct Postgres connection string (Neon or RDS).
  • MASTER_KEY: Authentication secret for the API and web UI.
  • LITELLM_API_BASE: Your LiteLLM gateway URL.
  • LITELLM_API_KEY: API key for the LiteLLM gateway.
  • PREINSTALLED_GITHUB_REPO: Fallback repo URL cloned into sandboxes.
  • BASE_URL: Public URL of your LAP deployment. Required for OAuth redirect URIs.
  • ENCRYPTION_KEY: Base64-encoded 32-byte key for encrypting stored OAuth tokens. Generate with: node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"
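If Node.js is not available on the machine where you generate secrets, openssl produces an equivalent ENCRYPTION_KEY. This alternative is not from the docs above, but any base64-encoded 32-byte value satisfies the same requirement:

```shell
# Alternative to the node one-liner: a base64-encoded 32-byte random key.
ENCRYPTION_KEY=$(openssl rand -base64 32)
echo "$ENCRYPTION_KEY"

# Sanity check: the decoded key must be exactly 32 bytes.
DECODED_LEN=$(printf '%s' "$ENCRYPTION_KEY" | base64 -d | wc -c)
echo "decoded length: $DECODED_LEN bytes"
```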
Never set K8S_SKIP_TLS_VERIFY=true against a production cluster. This option exists only for local kind development and disables TLS certificate validation entirely.
Harness images are snapshotted at agent-creation time. If you update K8S_HARNESS_IMAGE or K8S_HARNESS_IMAGE_OPENCODE after creating an agent, existing agents keep their old image. Delete and recreate the agent to pick up the new image.

Next steps

  • Follow the quickstart to install lap and open your first sandbox.
  • Review the environment variable reference for the full list of tuning options.
  • See deploy/render/README.md in the repository for the one-click Render Blueprint.