The simplest workflow: submit a Python snippet, get back an execution_id, fetch the result.

CLI

Run an inline snippet on a CPU machine:
lyceum python run "print('hello from lyceum')" --machine cpu
Run a file with a requirements.txt:
lyceum python run script.py --machine gpu.a100 --requirements requirements.txt

REST API

curl -X POST https://api.lyceum.technology/api/v2/external/execution/streaming/start \
  -H "Authorization: Bearer $LYCEUM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "print(1 + 1)",
    "machine_type": "cpu"
  }'
The response includes an execution_id and a streaming URL for live output.
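The same submission can be made from Python. A minimal sketch using only the standard library, assuming the endpoint and request body shown in the curl example above and that the JSON response carries an `execution_id` field as described:

```python
import json
import os
import urllib.request

API_BASE = "https://api.lyceum.technology/api/v2/external"

def build_start_request(code, machine_type, api_key):
    """Build the POST request for the streaming/start endpoint."""
    body = json.dumps({"code": code, "machine_type": machine_type}).encode()
    return urllib.request.Request(
        f"{API_BASE}/execution/streaming/start",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def start_execution(code, machine_type="cpu"):
    """Submit a snippet and return the execution_id from the response."""
    req = build_start_request(code, machine_type, os.environ["LYCEUM_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["execution_id"]
```

For example, `start_execution("print(1 + 1)")` submits the snippet from the curl example and returns its execution_id for later lookup.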

Fetching results

# Get the full execution record (status, stdout, stderr, result)
curl https://api.lyceum.technology/api/v2/external/execution/<execution_id> \
  -H "Authorization: Bearer $LYCEUM_API_KEY"
# Get just the logs
curl https://api.lyceum.technology/api/v2/external/logs/execution/<execution_id> \
  -H "Authorization: Bearer $LYCEUM_API_KEY"
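For batch-style use, the execution record above can be polled until the run finishes. A standard-library sketch, assuming the GET endpoint shown above; the terminal status values here are assumptions, not confirmed by this page — check the record your deployment actually returns:

```python
import json
import os
import time
import urllib.request

API_BASE = "https://api.lyceum.technology/api/v2/external"

def build_get_request(path, api_key):
    """Build an authenticated GET request against the API."""
    return urllib.request.Request(
        f"{API_BASE}/{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def is_terminal(status):
    # Assumed terminal statuses -- confirm against a real execution record.
    return status in ("completed", "failed", "cancelled")

def wait_for_result(execution_id, poll_seconds=2.0, timeout=600.0):
    """Poll the execution record until it reaches a terminal status."""
    api_key = os.environ["LYCEUM_API_KEY"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = build_get_request(f"execution/{execution_id}", api_key)
        with urllib.request.urlopen(req) as resp:
            record = json.load(resp)
        if is_terminal(record.get("status")):
            return record
        time.sleep(poll_seconds)
    raise TimeoutError(f"execution {execution_id} still running after {timeout}s")
```

Polling a fixed interval keeps the sketch simple; for long runs, the streaming URL returned at submission is the better way to follow output live.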