The Lyceum Cloud REST API exposes every dashboard feature: launching runs and VMs, deploying inference, managing storage and secrets, billing, and account settings. Every endpoint is listed in the Endpoints group in the sidebar, generated directly from the OpenAPI spec.

Base URL

https://api.lyceum.technology/api/v2/external

Authentication

Every request must include a bearer token in the Authorization header. Two token types are accepted:
| Token   | Format       | Lifetime                                     | When to use |
| ------- | ------------ | -------------------------------------------- | ----------- |
| API key | `lk_...`     | Long-lived, until revoked or expired         | CLI, CI, scripts, integrations |
| JWT     | Standard JWT | Short-lived, refreshable via `/auth/refresh` | Interactive sessions, dashboard, browser-based testing |
Generate API keys from the API Keys page in the dashboard. The full key value is shown exactly once at creation — store it in a secret manager immediately.
curl https://api.lyceum.technology/api/v2/external/billing/credits \
  -H "Authorization: Bearer lk_your_api_key"
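The same request can be made from Python with only the standard library. This is a minimal sketch: the `auth_headers` and `get_credits` helper names are ours, not part of any Lyceum SDK.

```python
import json
import urllib.request

BASE_URL = "https://api.lyceum.technology/api/v2/external"

def auth_headers(token: str) -> dict:
    # Works for both token types: API keys (lk_...) and JWT access tokens.
    return {"Authorization": f"Bearer {token}"}

def get_credits(token: str) -> dict:
    # GET /billing/credits with the bearer token attached.
    req = urllib.request.Request(
        BASE_URL + "/billing/credits",
        headers=auth_headers(token),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```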

Login (JWT flow)

curl -X POST https://api.lyceum.technology/api/v2/external/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "you@example.com", "password": "your-password"}'
The response includes access_token and refresh_token. Pass the access token as Authorization: Bearer <access_token>. When it expires, call POST /auth/refresh with the refresh token to get a new pair.

Validation errors

Endpoints return HTTP 422 with a structured HTTPValidationError body when the request payload is malformed or missing required fields. Other failures return standard HTTP status codes (400, 401, 403, 404, 5xx) with a detail field describing the error.

Endpoint groups

The full endpoint list is in the sidebar under Endpoints, grouped by tag. Highlights:
| Group | Purpose | Doc page |
| ----- | ------- | -------- |
| Authentication | `/auth/login`, `/auth/refresh`, `/auth/api-keys/...` | API Keys |
| MFA | TOTP enrolment, verification, backup codes | Settings |
| Streaming Execution | Submit Python runs, fetch status, abort | Launch a Run |
| Docker Execution | Submit Docker image runs | Launch a Run |
| Docker Compose Execution | Submit Compose stacks | Launch a Run |
| GPU Selection Execution | Fan out across GPU types | Runs |
| Workload Management | List, abort, stop runs | Runs |
| Execution Management | Get/delete a run, fetch timing | Runs |
| Observability - Logs | Loki-backed log queries | Logs |
| Observability - GPU Metrics | DCGM and system metrics per execution | GPU & System Metrics |
| Machine Types | Hardware catalogue and pricing | Launch a Run |
| User Quotas | Hardware profiles your account can use | Settings |
| Storage Files / Storage Credentials | Per-user S3 bucket | Storage |
| Environment Variables | Secrets injected into runs | Secrets |
| Dedicated Deployment External | Create, get, list, stop dedicated deployments | Dedicated Inference |
| Streaming Inference | SSE streaming for inference results | Streaming Inference |
| Batch API | OpenAI-compatible files and batches | |
| Billing | Credits, history, invoices, vouchers | Billing |
| VMs | Provision and manage GPU virtual machines | Your VMs |
For end-to-end worked examples — submit a run, poll status, fetch logs and metrics — see End-to-End API Workflow.