## Install

Install the `lyceum` CLI package.
## Authenticate

Run `lyceum auth login` to sign in through the browser flow. Pass `--manual` to be prompted for tokens directly, or supply an API key with `--api-key lk_...`.
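For example, a typical first session might look like this (the API key value is a placeholder):

```shell
# Browser-based sign-in (default)
lyceum auth login

# Or sign in directly with an API key, then verify credentials
lyceum auth login --api-key lk_your_key_here
lyceum auth status
```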
## Command groups

| Group | Purpose |
|---|---|
| `auth` | Sign in, sign out, check status |
| `python` | Submit Python runs |
| `docker` | Submit Docker image runs |
| `compose` | Submit Docker Compose runs |
| `gpu-selection` | Fan out runs across GPU types |
| `notebook` | Launch and manage Jupyter notebook sessions |
| `workloads` | List, abort, and view history of runs |
| `infer` | Deploy models, chat with deployments, manage them |
| `storage` | Manage files in your storage bucket |
| `vm` | Provision and manage GPU virtual machines |
## Global flags

| Flag | Description |
|---|---|
| `--version, -v` | Print the CLI version |
## auth

### `lyceum auth login`

Sign in to Lyceum Cloud.

| Flag | Description |
|---|---|
| `--url <base>` | Override the API base URL (for development) |
| `--dashboard-url <url>` | Override the dashboard URL |
| `--manual` | Use manual token entry instead of the browser flow |
| `--api-key <key>` | Sign in directly with an API key |
### `lyceum auth logout`

Clear stored credentials.

### `lyceum auth status`

Show the current authentication status.
## python

### `lyceum python run <code-or-file>`

Execute a Python snippet or file on Lyceum Cloud.

| Flag | Default | Description |
|---|---|---|
| `-m, --machine <type>` | `cpu` | Machine type (`cpu`, `a100`, `h100`, …) |
| `-f, --file-name <name>` | — | Name for the execution |
| `-r, --requirements <path>` | — | Requirements file path or pip requirements string |
| `--import <module>` | — | Pre-import a module (repeatable) |
| `--use-config / --no-config` | use | Read `.lyceum/config.json` workspace config |
| `-d, --debug` | off | Show debug information about config, requirements, and payload |
### `lyceum python config init|show|refresh`

Manage the workspace config at `.lyceum/config.json`. `init` creates one, `show` prints the current config, `refresh` regenerates it.
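For example (file names and the machine choice are illustrative):

```shell
# Run an inline snippet on the default cpu machine
lyceum python run "print('hello from lyceum')"

# Run a script on an A100 with pinned dependencies and a display name
lyceum python run train.py -m a100 -r requirements.txt -f training-run
```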
## docker

### `lyceum docker run <image>`

Run a Docker container on Lyceum Cloud.

| Flag | Default | Description |
|---|---|---|
| `-m, --machine <type>` | `cpu` | Machine type |
| `-t, --timeout <sec>` | 300 | Execution timeout in seconds |
| `-f, --file-name <name>` | — | Name for the execution |
| `-c, --command <cmd>` | — | Command to run in the container |
| `-e, --env <KEY=VAL>` | — | Environment variable (repeatable) |
| `-d, --detach` | off | Run the container in the background and print the execution ID |
| `--callback <url>` | — | Webhook URL for completion notification |
| `--registry-creds <json>` | — | Registry credentials as a JSON string |
| `--registry-type <type>` | — | Registry credential type (`basic`, `aws`) |
| `--s3 / --no-s3` | on | Mount your storage bucket inside the container |
| `--s3-mount-path <path>` | `/mnt/s3` | Where to mount the bucket inside the container |
| `--graceful-timeout <sec>` | 10 | Seconds to wait for graceful shutdown on cancel |
### `lyceum docker logs <execution-id>`

Stream logs from a Docker run.

### `lyceum docker registry-examples`

Print example registry credential payloads.
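A typical workflow (the image name and command are placeholders):

```shell
# Run an image on an H100 with an env var, a custom command,
# and a longer timeout, detaching once it is submitted
lyceum docker run myorg/trainer:latest \
  -m h100 \
  -c "python train.py" \
  -e WANDB_MODE=offline \
  -t 1800 \
  -d

# Follow the run's logs using the printed execution ID
lyceum docker logs <execution-id>
```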
## compose

### `lyceum compose run <compose-file>`

Run a Docker Compose stack on Lyceum Cloud.

| Flag | Default | Description |
|---|---|---|
| `-m, --machine <type>` | `cpu` | Hardware profile |
| `-t, --timeout <sec>` | 300 | Execution timeout in seconds |
| `-f, --file-name <name>` | — | Display name for the run |
| `-d, --detach` | off | Submit and return immediately |
| `--callback <url>` | — | Webhook URL for completion notification |
| `--registry-creds <json>` | — | Registry credentials as a JSON string |
| `--registry-type <type>` | — | Registry credential type (`basic`, `aws`) |
| `--graceful-timeout <sec>` | 10 | Seconds to wait for graceful shutdown on cancel |
### `lyceum compose logs <execution-id>`

Stream logs from a Compose run.

### `lyceum compose registry-examples`

Print example registry credential payloads.
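For example (file and display names are illustrative):

```shell
# Submit a Compose stack on an A100 with a 15-minute timeout, detached
lyceum compose run docker-compose.yml -m a100 -t 900 -f web-stack -d

# Stream the stack's logs
lyceum compose logs <execution-id>
```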
## gpu-selection

### `lyceum gpu-selection run <code-or-file>`

Submit Python code that fans out across GPU types so you can compare them.

| Flag | Default | Description |
|---|---|---|
| `-f, --file-name <name>` | — | Display name for the parent run |
| `-t, --timeout <sec>` | 60 | Per-sub-job timeout (1–600) |
| `-r, --requirements <path>` | — | Path to a `requirements.txt` |
| `--import <module>` | — | Pre-import a module (repeatable) |
| `--use-config / --no-config` | use | Read `.lyceum/config.json` |
| `--optimize <metric>` | — | Optimisation objective for selection |
| `-d, --debug` | off | Show debug information |
### `lyceum gpu-selection status <execution-id>`

Get the parent run status and per-sub-job results.
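For example (the benchmark script is a placeholder):

```shell
# Fan a benchmark out across GPU types with a 2-minute per-sub-job timeout
lyceum gpu-selection run benchmark.py -t 120 -r requirements.txt

# Check the parent run and compare per-GPU results
lyceum gpu-selection status <execution-id>
```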
## notebook

### `lyceum notebook launch`

Launch a Jupyter notebook server on Lyceum Cloud. Returns a URL you can open in a browser.

| Flag | Default | Description |
|---|---|---|
| `-m, --machine <type>` | `cpu` | Hardware profile |
| `-t, --timeout <sec>` | 600 | Session timeout (max 600) |
| `-i, --image <ref>` | `jupyter/base-notebook:latest` | Custom Jupyter image |
| `--token <token>` | `lyceum` | Jupyter notebook token |
| `-p, --port <port>` | 8888 | Port for the Jupyter server |
### `lyceum notebook list`

List notebook sessions.

### `lyceum notebook stop <execution-id>`

Stop a running notebook session.
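A typical session lifecycle:

```shell
# Launch a notebook on an A100 and open the printed URL in a browser
lyceum notebook launch -m a100

# See what is running, then stop the session when finished
lyceum notebook list
lyceum notebook stop <execution-id>
```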
## workloads

### `lyceum workloads list`

List your runs.

| Flag | Default | Description |
|---|---|---|
| `-n, --limit <n>` | 10 | Number of executions to show |

### `lyceum workloads abort <execution-id>`

Hard-stop a run. The run is marked aborted.

### `lyceum workloads history`

Show recent execution history.

| Flag | Default | Description |
|---|---|---|
| `-n, --limit <n>` | 10 | Number of executions to show |
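For example:

```shell
# Show the 25 most recent runs instead of the default 10
lyceum workloads list -n 25

# Hard-stop a misbehaving run, then review recent history
lyceum workloads abort <execution-id>
lyceum workloads history
```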
## infer

### `lyceum infer deploy <hf-model-id>`

Deploy a Hugging Face model as a dedicated inference endpoint.

| Flag | Default | Description |
|---|---|---|
| `-g, --gpu <profile>` | `gpu.a100` | Hardware profile |
| `-t, --hf-token <token>` | — | Hugging Face token (for gated models) |
| `--min-replicas <n>` | 1 | Minimum replicas to keep running |
| `--max-replicas <n>` | 1 | Maximum replicas allowed |
| `--target-rps <rps>` | 10.0 | Target requests/sec per replica for scale-up |
| `--target-latency <ms>` | 5000.0 | Target p95 latency in milliseconds for scale-up |
| `--stabilisation <sec>` | 300 | Scale-down stabilisation window |
| `-w, --wait` | off | Block until the deployment has healthy replicas |
### `lyceum infer status <deployment-id>`

Get the status of a deployment.

| Flag | Default | Description |
|---|---|---|
| `-a, --all` | off | Include stopped deployments |

### `lyceum infer stop <deployment-id>`

Stop a deployment.

### `lyceum infer models`

List available models.

| Flag | Default | Description |
|---|---|---|
| `-a, --all` | off | Include stopped deployments |
### `lyceum infer chat`

Send a chat completion to a deployed model.

| Flag | Default | Description |
|---|---|---|
| `-d, --deployment <id>` | — | Deployment ID to target |
| `-m, --model <id>` | — | Alias for `--deployment` |
| `-p, --prompt <text>` | — | Message text or path to a `.txt`/`.yaml`/`.xml` file |
| `-i, --image <path>` | — | Image file path (for multimodal models) |
| `--image-url <url>` | — | Image URL (for multimodal models) |
| `-s, --system <text>` | — | System message |
| `-t, --tokens <n>` | 1000 | Max output tokens |
| `--temperature <t>` | 0.7 | Sampling temperature |
| `-a, --async` | off | Submit asynchronously, return the request ID immediately |
| `--timeout <sec>` | 60 | Request timeout (10–60) |
### `lyceum infer result <request-id>`

Fetch the result of an async chat request.
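An end-to-end sketch (the model ID is illustrative; gated models also need `--hf-token`):

```shell
# Deploy a model and block until it has healthy replicas
lyceum infer deploy mistralai/Mistral-7B-Instruct-v0.3 -g gpu.a100 -w

# Chat with the deployment, capping output at 500 tokens
lyceum infer chat -d <deployment-id> -p "Summarise this text: ..." -t 500

# Or submit asynchronously and fetch the result later
lyceum infer chat -d <deployment-id> -p prompt.txt -a
lyceum infer result <request-id>
```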
## storage

### `lyceum storage ls [prefix]`

List files in your bucket.

| Flag | Default | Description |
|---|---|---|
| `-n, --max <n>` | 1000 | Maximum number of files to fetch |
| `-r, --recursive` | off | List all files recursively |
### `lyceum storage load <local-path>`

Upload a file or directory to your bucket.

| Flag | Default | Description |
|---|---|---|
| `-k, --key <path>` | filename or directory name | Remote path/key inside the bucket |
| `-r, --recursive` | off | Upload a directory recursively |
| `-f, --force` | off | Skip confirmation for directory uploads |
### `lyceum storage download <remote-path>`

Download a file from your bucket.

| Flag | Default | Description |
|---|---|---|
| `-o, --output <path>` | filename | Local output path |
### `lyceum storage rm <remote-path>`

Delete a single file.

| Flag | Default | Description |
|---|---|---|
| `-f, --force` | off | Skip confirmation |

### `lyceum storage rmdir <folder-prefix>`

Delete every file under a prefix.

| Flag | Default | Description |
|---|---|---|
| `-f, --force` | off | Skip confirmation |
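A round trip might look like this (paths and keys are placeholders):

```shell
# Upload a directory recursively under the key "datasets"
lyceum storage load ./datasets -r -k datasets -f

# List what landed, then pull one file back down
lyceum storage ls datasets/ -r
lyceum storage download datasets/train.csv -o ./train.csv

# Remove everything under the prefix without a confirmation prompt
lyceum storage rmdir datasets/ -f
```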
## vm

### `lyceum vm start`

Provision a new VM instance.

| Flag | Default | Description |
|---|---|---|
| `-h, --hardware-profile <profile>` | `a100` | Hardware profile (`cpu`, `a100`, `h100`, …) |
| `-k, --key <key>` | required | SSH public key for VM access |
| `-g, --gpu-count <n>` | 1 | Number of GPUs |
| `-a, --async` | off | Return immediately without waiting for the VM to be ready |
### `lyceum vm list`

List your VMs. By default the listing includes provisioning, ready, failed, and terminated VMs; toggle each state with the corresponding flag.

| Flag | Default | Description |
|---|---|---|
| `-r/-R, --ready/--no-ready` | on | Include fully operational VMs |
| `-f/-F, --failed/--no-failed` | on | Include failed VMs |
| `-t/-T, --terminated/--no-terminated` | on | Include terminated VMs |
### `lyceum vm status <vm-id>`

Get detailed status for a VM, including IP and connection info.

### `lyceum vm availability`

Check which hardware profiles are currently available to provision.

### `lyceum vm terminate <vm-id>`

Terminate a VM.

| Flag | Default | Description |
|---|---|---|
| `-f, --force` | off | Skip confirmation |
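A full lifecycle sketch (the SSH key path is a placeholder):

```shell
# Check what hardware can currently be provisioned
lyceum vm availability

# Start a 2-GPU H100 VM using your SSH public key
lyceum vm start -h h100 -k "$(cat ~/.ssh/id_ed25519.pub)" -g 2

# Get the IP and connection info, then tear the VM down
lyceum vm status <vm-id>
lyceum vm terminate <vm-id> -f
```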

