| You have… | Use |
|---|---|
| A `.py` script and a `requirements.txt` | Python |
| A container image (public or private) | Docker image |
| A multi-service stack (`docker-compose.yml`) | Docker Compose |
Every submission returns an `execution_id` plus a streaming URL. From that point on you read logs, fetch results, and abort through the same set of endpoints — see Runs.
## Choosing a hardware profile
Every run targets a hardware profile (e.g. `cpu`, `gpu.a100`, `gpu.h100`). You can only launch on profiles your account is authorised for. Three endpoints help you check what's available:
| Endpoint | Returns |
|---|---|
| `GET /machine-types` | The full catalogue with hourly pricing |
| `GET /user/quotas/available-hardware` | Just the profiles your account can use |
| `GET /resources/available-resources` | Detailed hardware specs and pricing |
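As a sketch of how the first two responses might be combined, the helper below filters the catalogue down to the profiles your account can actually use, cheapest first. The JSON shapes are illustrative assumptions, not the documented API schema:

```python
# Sketch: intersect the machine-type catalogue with the account's quota.
# The payload shapes below are assumptions, not the documented API schema.

def usable_profiles(catalogue, allowed):
    """Return (name, hourly_price) pairs for profiles the account can use,
    cheapest first."""
    allowed_set = set(allowed)
    usable = [
        (m["name"], m["price_per_hour"])
        for m in catalogue
        if m["name"] in allowed_set
    ]
    return sorted(usable, key=lambda pair: pair[1])

# Example payloads (invented for illustration):
catalogue = [
    {"name": "cpu", "price_per_hour": 0.10},
    {"name": "gpu.a100", "price_per_hour": 2.50},
    {"name": "gpu.h100", "price_per_hour": 4.00},
]
allowed = ["cpu", "gpu.a100"]

print(usable_profiles(catalogue, allowed))
```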
## Python
Python is the most common entry point. The CLI handles packaging your script, bundling local imports, and installing pip requirements before the code runs. The `.lyceum/config.json` workspace file lets you persist requirements and import paths so you don't need to repeat them on every invocation.
- CLI
- REST API
| Flag | Description |
|---|---|
| `-m, --machine` | Hardware profile (`cpu`, `gpu.a100`, `gpu.h100`, …) |
| `-r, --requirements` | Path to a `requirements.txt` |
| `-f, --file-name` | Display name for the run |
| `--import` | Local imports to bundle |
| `--use-config / --no-config` | Toggle reading `.lyceum/config.json` |
| `-d, --debug` | Enable debug logging |
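These settings can be persisted in the workspace file. A hypothetical sketch of what it might contain (the key names are illustrative assumptions, not a documented schema):

```json
{
  "requirements": "requirements.txt",
  "imports": ["utils/", "models/"],
  "machine": "gpu.a100"
}
```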
Use the workspace config file (`.lyceum/config.json`) for shared dependencies and import paths.

## Docker image
Use Docker when your environment is already containerised — for example a CUDA image with preinstalled dependencies, or a job that doesn't fit the Python entrypoint cleanly. The platform pulls the image, runs the command you specify, and streams stdout/stderr back. For private registries, the request supports two credential modes:

- Basic auth — username and password for any registry
- AWS — access key, secret key, session token, and region for Amazon ECR
- CLI
- REST API
| Flag | Default | Description |
|---|---|---|
| `-c, --command` | — | Command to run inside the container |
| `-e, --env` | — | Environment variable, e.g. `KEY=value` (repeatable) |
| `-m, --machine` | `cpu` | Machine type |
| `--s3 / --no-s3` | on | Mount your storage bucket inside the container |
| `--s3-mount-path` | `/mnt/s3` | Where to mount the bucket inside the container |
| `--callback` | — | Webhook URL for completion notification |
| `--registry-creds` | — | Registry credentials as a JSON string |
| `--registry-type` | — | Registry credential type (`basic`, `aws`) |
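As an illustration of the AWS credential mode, the snippet below builds a request body for a run pulling from a private ECR registry. The field names follow the flags above but are assumptions, not the documented REST schema, and the credential values are placeholders:

```python
# Sketch: request body for a Docker run against a private ECR registry.
# Field names mirror the CLI flags above; they are assumptions, not the
# documented REST schema. Credential values are placeholders.
import json

payload = {
    "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/train:latest",
    "command": "python train.py",
    "machine": "gpu.a100",
    "env": {"EPOCHS": "10"},
    "registry_type": "aws",
    "registry_creds": json.dumps({
        "access_key": "AKIA...",   # placeholders: never hardcode real keys
        "secret_key": "...",
        "session_token": "...",
        "region": "eu-west-1",
    }),
}

print(json.dumps(payload, indent=2))
```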
By default the bucket is mounted at `/mnt/s3` inside the container.

## Docker Compose
For multi-service stacks (e.g. an app talking to a database) you can submit a whole `docker-compose.yml`. The platform brings up all services on the same machine and tears them down when the entrypoint service exits.
- CLI
- REST API
| Flag | Description |
|---|---|
| `-m, --machine` | Hardware profile |
| `--env-file` | Env file passed to the stack |
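As a sketch of the app-plus-database case mentioned above, a minimal compose file might look like this (the image names, and which service counts as the entrypoint, are assumptions for illustration):

```yaml
services:
  app:            # assumed entrypoint service: the stack is torn down when it exits
    build: .
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=postgres
```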
## Aborting
Each execution type has its own abort endpoint. Use abort to immediately kill a run; for a graceful stop (notebooks, interactive sessions), use `POST /workloads/stop/{execution_id}` instead.
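A minimal sketch of building the graceful-stop request. The stop path comes from the text above; the base URL and bearer-token auth are assumptions for illustration:

```python
# Sketch: building the graceful-stop URL. The /workloads/stop path is from
# the docs above; BASE_URL and bearer auth are assumptions for illustration.

BASE_URL = "https://api.example.com"  # placeholder, not the real endpoint


def stop_url(execution_id: str) -> str:
    """URL for gracefully stopping a run (notebooks, interactive sessions)."""
    return f"{BASE_URL}/workloads/stop/{execution_id}"


# Sending it would look roughly like:
#   requests.post(stop_url(execution_id),
#                 headers={"Authorization": f"Bearer {token}"})

print(stop_url("abc123"))
```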
## See also

- **Worked examples**: Real CLI and curl invocations for Python and Docker runs.
- **Runs**: Monitor, log, and abort executions after they're submitted.

