Lyceum Cloud accepts three execution types: Python, Docker image, and Docker Compose. Pick the one that matches how your code is packaged.
| You have… | Use |
| --- | --- |
| A `.py` script and a `requirements.txt` | Python |
| A container image (public or private) | Docker image |
| A multi-service stack (`docker-compose.yml`) | Docker Compose |
All three submission paths return an execution_id plus a streaming URL. From that point on you read logs, fetch results, and abort through the same set of endpoints — see Runs.

Choosing a hardware profile

Every run targets a hardware profile (e.g. cpu, gpu.a100, gpu.h100). You can only launch on profiles your account is authorised for. Three endpoints help you check what’s available:
| Endpoint | Returns |
| --- | --- |
| `GET /machine-types` | The full catalogue with hourly pricing |
| `GET /user/quotas/available-hardware` | Just the profiles your account can use |
| `GET /resources/available-resources` | Detailed hardware specs and pricing |
If a profile you need isn’t listed, contact info@lyceum.technology.
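To script this check, you can cross-reference the quota endpoint against the catalogue. A minimal sketch, assuming the two responses are JSON lists shaped as in the sample payloads below; the field names `name` and `price_per_hour` are illustrative, not the documented schema:

```python
# Pick the cheapest hourly profile the account is authorised for.
# NOTE: the payload shapes below are assumptions for illustration,
# not the documented Lyceum API schema.

def cheapest_allowed(catalogue, allowed_names):
    """catalogue: parsed GET /machine-types response (assumed shape).
    allowed_names: profile names from GET /user/quotas/available-hardware."""
    usable = [m for m in catalogue if m["name"] in allowed_names]
    if not usable:
        raise ValueError("no authorised profile found; contact info@lyceum.technology")
    return min(usable, key=lambda m: m["price_per_hour"])

# Sample payloads standing in for the two endpoint responses:
catalogue = [
    {"name": "cpu", "price_per_hour": 0.10},
    {"name": "gpu.a100", "price_per_hour": 2.50},
    {"name": "gpu.h100", "price_per_hour": 4.00},
]
allowed = {"cpu", "gpu.a100"}

print(cheapest_allowed(catalogue, allowed)["name"])  # cpu
```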

Python

Python is the most common entry point. The CLI handles packaging your script, bundling local imports, and installing pip requirements before the code runs. The .lyceum/config.json workspace file lets you persist requirements and import paths so you don’t need to repeat them on every invocation.
lyceum python run script.py --machine gpu.a100 --requirements requirements.txt
| Flag | Description |
| --- | --- |
| `-m, --machine` | Hardware profile (`cpu`, `gpu.a100`, `gpu.h100`, …) |
| `-r, --requirements` | Path to a `requirements.txt` |
| `-f, --file-name` | Display name for the run |
| `--import` | Local imports to bundle |
| `--use-config` / `--no-config` | Toggle reading `.lyceum/config.json` |
| `-d, --debug` | Enable debug logging |
Manage the workspace config (.lyceum/config.json) for shared dependencies and import paths:
lyceum python config init      # create
lyceum python config show      # print current config
lyceum python config refresh   # regenerate

Docker image

Use Docker when your environment is already containerised — for example a CUDA image with preinstalled dependencies, or a job that doesn’t fit the Python entrypoint cleanly. The platform pulls the image, runs the command you specify, and streams stdout/stderr back. For private registries, the request supports two credential modes:
  • Basic auth — username and password for any registry
  • AWS — access key, secret key, session token, and region for Amazon ECR
lyceum docker run python:3.11-slim -c "python -c 'print(1)'" -m cpu
| Flag | Default | Description |
| --- | --- | --- |
| `-c, --command` | | Command to run inside the container |
| `-e, --env` | | Environment variable, e.g. `KEY=value` (repeatable) |
| `-m, --machine` | `cpu` | Machine type |
| `--s3` / `--no-s3` | on | Mount your storage bucket inside the container |
| `--s3-mount-path` | `/mnt/s3` | Where to mount the bucket inside the container |
| `--callback` | | Webhook URL for completion notification |
| `--registry-creds` | | Registry credentials as a JSON string |
| `--registry-type` | | Registry credential type (`basic`, `aws`) |
By default the bucket is mounted at /mnt/s3 inside the container.
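The JSON string for `--registry-creds` can be assembled programmatically. A sketch in Python; the key names (`username`, `access_key`, and so on) follow the credential fields listed above but are illustrative assumptions, not the documented payload schema:

```python
import json

def registry_creds(registry_type, **fields):
    """Serialise credentials for --registry-creds.
    Key names here are illustrative assumptions, not the documented schema."""
    required = {
        "basic": {"username", "password"},
        "aws": {"access_key", "secret_key", "session_token", "region"},
    }[registry_type]
    missing = required - fields.keys()
    if missing:
        raise ValueError(f"missing fields for {registry_type}: {sorted(missing)}")
    return json.dumps(fields)

basic = registry_creds("basic", username="ci-bot", password="s3cret")
aws = registry_creds("aws", access_key="AKIA...", secret_key="...",
                     session_token="...", region="eu-west-1")
```

The resulting string would then be passed alongside the matching type, e.g. `--registry-type basic --registry-creds "$CREDS"`.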

Docker Compose

For multi-service stacks (e.g. an app talking to a database) you can submit a whole docker-compose.yml. The platform brings up all services on the same machine and tears them down when the entrypoint service exits.
lyceum compose run docker-compose.yml --machine gpu.a100
| Flag | Description |
| --- | --- |
| `-m, --machine` | Hardware profile |
| `--env-file` | Env file passed to the stack |
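A minimal stack of the kind described above might look like this; the service names and images are illustrative, and which service the platform treats as the entrypoint is not specified here:

```yaml
services:
  app:
    image: python:3.11-slim
    command: python -c "print('hello from the stack')"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```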

Aborting

Each execution type has its own abort endpoint. Use abort to immediately kill a run; for graceful stop (notebooks, interactive sessions), use POST /workloads/stop/{execution_id} instead.
POST /execution/streaming/abort/{execution_id}
POST /execution/image/abort/{execution_id}
POST /execution/compose/abort/{execution_id}
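Since only one path segment differs between the three endpoints, a small helper can build the right abort URL. A sketch; the base URL is a placeholder, and the mapping of the `streaming` path to Python runs is an assumption based on the ordering above:

```python
# Build the abort URL for an execution. The path segments come from the
# endpoints listed above; BASE_URL is a placeholder, not the real host,
# and mapping "python" -> "streaming" is an assumption.
BASE_URL = "https://api.example.com"

ABORT_PATHS = {
    "python": "/execution/streaming/abort/{execution_id}",
    "image": "/execution/image/abort/{execution_id}",
    "compose": "/execution/compose/abort/{execution_id}",
}

def abort_url(execution_type, execution_id):
    return BASE_URL + ABORT_PATHS[execution_type].format(execution_id=execution_id)

print(abort_url("image", "abc123"))
# https://api.example.com/execution/image/abort/abc123
```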

See also

Worked examples

Real CLI and curl invocations for Python and Docker runs.

Runs

Monitor, log, and abort executions after they’re submitted.