

The lap CLI talks to a running LAP instance. If you haven't deployed one yet, see Installation.

First time? Install the CLI:

```shell
git clone https://github.com/BerriAI/litellm-agent-platform.git
cd litellm-agent-platform/cli && npm install
ln -sf "$PWD/bin/lap.mjs" ~/.local/bin/lap
```

1. Log in

```shell
lap login
```

Paste your LAP URL and MASTER_KEY when prompted. Both values are saved to ~/.lap/config.json.
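For reference, the saved config is plain JSON. A plausible shape is sketched below; the field names are assumptions for illustration, not the documented schema:

```json
{
  "url": "https://lap.example.com",
  "master_key": "sk-example"
}
```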

2. Open an agent

```shell
lap
```

Pick your Codex agent from the interactive list. lap spins up a Kubernetes-sandboxed pod running the OpenAI Codex CLI, attaches your local terminal to its TTY over a WebSocket, and drops you straight in. At boot, the harness maps LITELLM_API_BASE / LITELLM_API_KEY to OPENAI_BASE_URL / OPENAI_API_KEY, routing Codex through your LiteLLM gateway as if it were talking to OpenAI directly. Press Ctrl-D to detach; the session stays alive for 24 hours.
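The environment-variable mapping the harness performs at boot can be sketched in shell. The values here are illustrative placeholders; in a real pod they come from your LAP deployment:

```shell
# Illustrative values; a real pod receives these from the LAP deployment.
LITELLM_API_BASE="https://litellm.example.internal"
LITELLM_API_KEY="sk-litellm-example"

# The harness re-exports them under the names the Codex CLI expects,
# so Codex sends its OpenAI-style requests to the LiteLLM gateway.
export OPENAI_BASE_URL="$LITELLM_API_BASE"
export OPENAI_API_KEY="$LITELLM_API_KEY"
```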

Creating an agent

If you're setting up the platform, create a Codex agent first. In the UI, choose codex from the Harness picker and pick an OpenAI model, or via the API:

```shell
curl -X POST "$LAP_URL/api/v1/managed_agents/agents" \
  -H "Authorization: Bearer $MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name":"my-codex","harness_id":"codex","model":"openai/gpt-4o"}'
```