
The lap CLI talks to a running LAP instance. If you haven’t deployed one yet, see Installation.
First time? Install the CLI:
git clone https://github.com/BerriAI/litellm-agent-platform.git
cd litellm-agent-platform/cli && npm install
ln -sf "$PWD/bin/lap.mjs" ~/.local/bin/lap
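The symlink above assumes ~/.local/bin is on your PATH. A quick check for that (shell sketch; the PATH-prepend fallback is only an illustration):

```shell
# Check whether ~/.local/bin is already on PATH; prepend it for this shell if not.
case ":$PATH:" in
  *":$HOME/.local/bin:"*) : ;;  # already present, nothing to do
  *) export PATH="$HOME/.local/bin:$PATH" ;;
esac
```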

1. Log in

lap login
Paste your LAP URL and MASTER_KEY when prompted; they are saved to ~/.lap/config.json.
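For reference, the saved config is a small JSON file. The key names below are illustrative assumptions, not documented fields:

```json
{
  "url": "https://lap.example.com",
  "masterKey": "<your MASTER_KEY>"
}
```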

2. Open an agent

lap
Pick your Hermes agent from the interactive list. lap spins up a Kubernetes-sandboxed pod running NousResearch/hermes-agent in --tui mode (the Ink-based interactive UI), attaches your local terminal to its TTY over a WebSocket, and drops you straight in.

The harness routes Hermes through your LiteLLM gateway by mapping LITELLM_API_BASE / LITELLM_API_KEY → OPENAI_BASE_URL / OPENAI_API_KEY at boot, so Hermes talks to your gateway as if it were OpenAI direct, including for non-OpenAI providers.

Hermes persists state under ~/.hermes/ (sessions, memories, skills) inside the pod's writable filesystem; this state does not survive across fresh lap sessions. Press Ctrl-D to detach; the session stays alive for 24h.
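The env-var mapping the harness performs at boot can be sketched as plain shell. The mapping itself is from the docs; the example values and the explicit export form here are illustrative:

```shell
# What the harness effectively does at pod boot:
# LITELLM_API_BASE / LITELLM_API_KEY -> OPENAI_BASE_URL / OPENAI_API_KEY.
LITELLM_API_BASE="https://litellm.example.internal"  # hypothetical gateway URL
LITELLM_API_KEY="sk-example-key"                     # hypothetical key
export OPENAI_BASE_URL="$LITELLM_API_BASE"
export OPENAI_API_KEY="$LITELLM_API_KEY"
```

Hermes then reads only the OPENAI_* variables, which is why any provider your gateway can route looks like OpenAI to it.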

Creating an agent

If you’re setting up the platform, create a Hermes agent first. In the UI, choose hermes from the Harness picker and select a model; or via the API:
curl -X POST $LAP_URL/api/v1/managed_agents/agents \
  -H "Authorization: Bearer $MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name":"my-hermes","harness_id":"hermes","model":"openai/gpt-4o"}'
Hermes accepts any model your LiteLLM gateway can route. Inside the TUI you can also run hermes model to switch models on the fly.