The Lyceum CLI is a Python-based command-line tool that wraps the REST API. Every dashboard feature, including inference deployment management, is accessible from it.

Install

pip install lyceum-cli
Requires Python 3.8 or newer. The package installs a single command, `lyceum`.

Authenticate

lyceum auth login
Logs you in via the dashboard browser flow and stores the token locally. For non-interactive environments, pass --manual to be prompted for tokens directly, or supply an API key with --api-key lk_....
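For example, a CI job can sign in non-interactively with an API key kept in an environment variable (the variable name `LYCEUM_API_KEY` is illustrative):

```shell
# Non-interactive sign-in for CI environments
lyceum auth login --api-key "$LYCEUM_API_KEY"
# Confirm the session before submitting any work
lyceum auth status
```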

Command groups

| Group | Purpose |
| --- | --- |
| `auth` | Sign in, sign out, check status |
| `python` | Submit Python runs |
| `docker` | Submit Docker image runs |
| `compose` | Submit Docker Compose runs |
| `gpu-selection` | Fan out runs across GPU types |
| `notebook` | Launch and manage Jupyter notebook sessions |
| `workloads` | List, abort, and view history of runs |
| `infer` | Deploy models, chat with deployments, manage them |
| `storage` | Manage files in your storage bucket |
| `vm` | Provision and manage GPU virtual machines |

Global flags

| Flag | Description |
| --- | --- |
| `--version`, `-v` | Print the CLI version |

auth

lyceum auth login

Sign in to Lyceum Cloud.
| Flag | Description |
| --- | --- |
| `--url <base>` | Override the API base URL (for development) |
| `--dashboard-url <url>` | Override the dashboard URL |
| `--manual` | Use manual token entry instead of the browser flow |
| `--api-key <key>` | Sign in directly with an API key |

lyceum auth logout

Clear stored credentials.

lyceum auth status

Show the current authentication status.

python

lyceum python run <code-or-file>

Execute a Python snippet or file on Lyceum Cloud.
| Flag | Default | Description |
| --- | --- | --- |
| `-m, --machine <type>` | `cpu` | Machine type (`cpu`, `a100`, `h100`, …) |
| `-f, --file-name <name>` | | Name for the execution |
| `-r, --requirements <path>` | | Requirements file path or pip requirements string |
| `--import <module>` | | Pre-import a module (repeatable) |
| `--use-config / --no-config` | use | Read `.lyceum/config.json` workspace config |
| `-d, --debug` | off | Show debug information about config, requirements, and payload |

Example: `lyceum python run script.py -m gpu.a100 -r requirements.txt`

lyceum python config init|show|refresh

Manage the workspace config at .lyceum/config.json. init creates one, show prints the current config, refresh regenerates it.
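A typical workflow, sketched with only the subcommands documented above (the script name `main.py` is a placeholder):

```shell
lyceum python config init               # create .lyceum/config.json in the workspace
lyceum python config show               # inspect the generated config
lyceum python config refresh            # regenerate it after the environment changes
lyceum python run main.py --no-config   # bypass the config for a one-off run
```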

docker

lyceum docker run <image>

Run a Docker container on Lyceum Cloud.
| Flag | Default | Description |
| --- | --- | --- |
| `-m, --machine <type>` | `cpu` | Machine type |
| `-t, --timeout <sec>` | `300` | Execution timeout in seconds |
| `-f, --file-name <name>` | | Name for the execution |
| `-c, --command <cmd>` | | Command to run in the container |
| `-e, --env <KEY=VAL>` | | Environment variable (repeatable) |
| `-d, --detach` | off | Run container in background and print execution ID |
| `--callback <url>` | | Webhook URL for completion notification |
| `--registry-creds <json>` | | Registry credentials as a JSON string |
| `--registry-type <type>` | | Registry credential type (`basic`, `aws`) |
| `--s3 / --no-s3` | on | Mount your storage bucket inside the container |
| `--s3-mount-path <path>` | `/mnt/s3` | Where to mount the bucket inside the container |
| `--graceful-timeout <sec>` | `10` | Seconds to wait for graceful shutdown on cancel |

Example: `lyceum docker run python:3.11-slim -c "python -c 'print(1+1)'" -m cpu`
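As a further sketch, a detached run with environment variables and a completion webhook (the image name, variable values, and URL are placeholders):

```shell
lyceum docker run myorg/train:latest -m a100 -d \
  -e EPOCHS=10 -e LR=3e-4 \
  --callback https://example.com/hooks/lyceum
# -d prints the execution ID; follow the run later with:
lyceum docker logs <execution-id>
```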

lyceum docker logs <execution-id>

Stream logs from a Docker run.

lyceum docker registry-examples

Print example registry credential payloads.
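To pull from a private registry, view the expected payload shape with this command, then pass real credentials back in (the image name and credentials file are placeholders):

```shell
lyceum docker registry-examples        # shows the expected JSON payload shapes
lyceum docker run ghcr.io/myorg/private:latest \
  --registry-type basic \
  --registry-creds "$(cat registry-creds.json)"
```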

compose

lyceum compose run <compose-file>

Run a Docker Compose stack on Lyceum Cloud.
| Flag | Default | Description |
| --- | --- | --- |
| `-m, --machine <type>` | `cpu` | Hardware profile |
| `-t, --timeout <sec>` | `300` | Execution timeout in seconds |
| `-f, --file-name <name>` | | Display name for the run |
| `-d, --detach` | off | Submit and return immediately |
| `--callback <url>` | | Webhook URL for completion notification |
| `--registry-creds <json>` | | Registry credentials as a JSON string |
| `--registry-type <type>` | | Registry credential type (`basic`, `aws`) |
| `--graceful-timeout <sec>` | `10` | Seconds to wait for graceful shutdown on cancel |
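A sketch of submitting a stack detached and then following its logs (the file and display name are placeholders):

```shell
lyceum compose run docker-compose.yml -m cpu -t 600 -f my-stack -d
# -d returns immediately with an execution ID; stream logs with:
lyceum compose logs <execution-id>
```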

lyceum compose logs <execution-id>

Stream logs from a Compose run.

lyceum compose registry-examples

Print example registry credential payloads.

gpu-selection

lyceum gpu-selection run <code-or-file>

Submit Python code that fans out across GPU types so you can compare them.
| Flag | Default | Description |
| --- | --- | --- |
| `-f, --file-name <name>` | | Display name for the parent run |
| `-t, --timeout <sec>` | `60` | Per-sub-job timeout (1–600) |
| `-r, --requirements <path>` | | Path to a requirements.txt |
| `--import <module>` | | Pre-import a module (repeatable) |
| `--use-config / --no-config` | use | Read `.lyceum/config.json` |
| `--optimize <metric>` | | Optimisation objective for selection |
| `-d, --debug` | off | Show debug information |
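For instance, a benchmark script could be fanned out with a per-GPU timeout (the script name and the `latency` metric value are illustrative; the supported metric names are not listed here):

```shell
lyceum gpu-selection run benchmark.py -t 120 -r requirements.txt --optimize latency
# Poll the parent run for per-GPU results
lyceum gpu-selection status <execution-id>
```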

lyceum gpu-selection status <execution-id>

Get the parent run status and per-sub-job results.

notebook

lyceum notebook launch

Launch a Jupyter notebook server on Lyceum Cloud. Returns a URL you can open in a browser.
| Flag | Default | Description |
| --- | --- | --- |
| `-m, --machine <type>` | `cpu` | Hardware profile |
| `-t, --timeout <sec>` | `600` | Session timeout (max 600) |
| `-i, --image <ref>` | `jupyter/base-notebook:latest` | Custom Jupyter image |
| `--token <token>` | `lyceum` | Jupyter notebook token |
| `-p, --port <port>` | `8888` | Port for the Jupyter server |
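A sketch of launching a GPU-backed session with a non-default token rather than the predictable default (the token generation method is illustrative):

```shell
lyceum notebook launch -m a100 -t 600 --token "$(openssl rand -hex 16)"
# Later, clean up:
lyceum notebook list
lyceum notebook stop <execution-id>
```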

lyceum notebook list

List notebook sessions.

lyceum notebook stop <execution-id>

Stop a running notebook session.

workloads

lyceum workloads list

List your runs.
| Flag | Default | Description |
| --- | --- | --- |
| `-n, --limit <n>` | `10` | Number of executions to show |

lyceum workloads abort <execution-id>

Hard-stop a run. The run is marked aborted.

lyceum workloads history

Show recent execution history.
| Flag | Default | Description |
| --- | --- | --- |
| `-n, --limit <n>` | `10` | Number of executions to show |
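Together these subcommands support a simple triage loop:

```shell
lyceum workloads list -n 5              # most recent runs
lyceum workloads abort <execution-id>   # hard-stop one of them
lyceum workloads history -n 20          # review the wider history
```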

infer

lyceum infer deploy <hf-model-id>

Deploy a Hugging Face model as a dedicated inference endpoint.
| Flag | Default | Description |
| --- | --- | --- |
| `-g, --gpu <profile>` | `gpu.a100` | Hardware profile |
| `-t, --hf-token <token>` | | Hugging Face token (for gated models) |
| `--min-replicas <n>` | `1` | Minimum replicas to keep running |
| `--max-replicas <n>` | `1` | Maximum replicas allowed |
| `--target-rps <rps>` | `10.0` | Target requests/sec per replica for scale-up |
| `--target-latency <ms>` | `5000.0` | Target p95 latency in milliseconds for scale-up |
| `--stabilisation <sec>` | `300` | Scale-down stabilisation window |
| `-w, --wait` | off | Block until the deployment has healthy replicas |

Example: `lyceum infer deploy meta-llama/Llama-3.1-8B-Instruct -g gpu.a100 --min-replicas 1 --max-replicas 3`

lyceum infer status <deployment-id>

Get the status of a deployment.
| Flag | Default | Description |
| --- | --- | --- |
| `-a, --all` | off | Include stopped deployments |

lyceum infer stop <deployment-id>

Stop a deployment.

lyceum infer models

List available models.
| Flag | Default | Description |
| --- | --- | --- |
| `-a, --all` | off | Include stopped deployments |

lyceum infer chat

Send a chat completion to a deployed model.
| Flag | Default | Description |
| --- | --- | --- |
| `-d, --deployment <id>` | | Deployment ID to target |
| `-m, --model <id>` | | Alias for `--deployment` |
| `-p, --prompt <text>` | | Message text or path to a `.txt`/`.yaml`/`.xml` file |
| `-i, --image <path>` | | Image file path (for multimodal models) |
| `--image-url <url>` | | Image URL (for multimodal models) |
| `-s, --system <text>` | | System message |
| `-t, --tokens <n>` | `1000` | Max output tokens |
| `--temperature <t>` | `0.7` | Sampling temperature |
| `-a, --async` | off | Submit async, return request ID immediately |
| `--timeout <sec>` | `60` | Request timeout (10–60) |
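A sketch of both the synchronous and asynchronous paths (the deployment ID variable, prompt text, and prompt file are placeholders):

```shell
# Synchronous: waits up to --timeout seconds for the completion
lyceum infer chat -d "$DEPLOYMENT_ID" -s "You are concise." -p "Summarise RFC 2119." -t 256

# Asynchronous: returns a request ID immediately
lyceum infer chat -d "$DEPLOYMENT_ID" -p prompt.txt -a
lyceum infer result <request-id>
```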

lyceum infer result <request-id>

Fetch the result of an async chat request.

storage

lyceum storage ls [prefix]

List files in your bucket.
| Flag | Default | Description |
| --- | --- | --- |
| `-n, --max <n>` | `1000` | Maximum number of files to fetch |
| `-r, --recursive` | off | List all files recursively |

lyceum storage load <local-path>

Upload a file or directory to your bucket.
| Flag | Default | Description |
| --- | --- | --- |
| `-k, --key <path>` | file or directory name | Remote path/key inside the bucket |
| `-r, --recursive` | off | Upload a directory recursively |
| `-f, --force` | off | Skip confirmation for directory uploads |

lyceum storage download <remote-path>

Download a file from your bucket.
| Flag | Default | Description |
| --- | --- | --- |
| `-o, --output <path>` | filename | Local output path |
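A round trip through the bucket, using placeholder paths:

```shell
lyceum storage load data/ -k datasets/run1 -r -f            # upload a directory
lyceum storage ls datasets/ -r                              # verify the upload
lyceum storage download datasets/run1/train.csv -o train.csv
```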

lyceum storage rm <remote-path>

Delete a single file.
| Flag | Default | Description |
| --- | --- | --- |
| `-f, --force` | off | Skip confirmation |

lyceum storage rmdir <folder-prefix>

Delete every file under a prefix.
| Flag | Default | Description |
| --- | --- | --- |
| `-f, --force` | off | Skip confirmation |

vm

lyceum vm start

Provision a new VM instance.
| Flag | Default | Description |
| --- | --- | --- |
| `-h, --hardware-profile <profile>` | `a100` | Hardware profile (`cpu`, `a100`, `h100`, …) |
| `-k, --key <key>` | required | SSH public key for VM access |
| `-g, --gpu-count <n>` | `1` | Number of GPUs |
| `-a, --async` | off | Return immediately without waiting for the VM to be ready |

Example: `lyceum vm start -h h100 -k "$(cat ~/.ssh/id_ed25519.pub)" -g 1`

lyceum vm list

List your VMs. By default, includes provisioning, ready, failed, and terminated VMs; toggle each with the corresponding flag.
| Flag | Default | Description |
| --- | --- | --- |
| `-r/-R, --ready/--no-ready` | on | Include fully operational VMs |
| `-f/-F, --failed/--no-failed` | on | Include failed VMs |
| `-t/-T, --terminated/--no-terminated` | on | Include terminated VMs |
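For example, to show only VMs that are still live:

```shell
lyceum vm list --no-failed --no-terminated
```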

lyceum vm status <vm-id>

Get detailed status for a VM, including IP and connection info.

lyceum vm availability

Check what hardware profiles are currently available to provision.

lyceum vm terminate <vm-id>

Terminate a VM.
| Flag | Default | Description |
| --- | --- | --- |
| `-f, --force` | off | Skip confirmation |