Lyceum offers GPU and CPU resources in two distinct contexts: serverless workloads (per-job execution via the CLI or API) and VM instances (long-running dedicated machines launched from the dashboard or API). The available machine identifiers and pricing differ between the two.

Serverless Workloads

When running code through lyceum python run, lyceum docker run, lyceum compose run, or lyceum notebook, select the underlying hardware with the -m / --machine flag.
# CPU (default)
lyceum python run script.py

# GPU options
lyceum python run train.py -m gpu.a100
lyceum docker run pytorch/pytorch:latest -m gpu.h100 -c "python train.py"
lyceum compose run docker-compose.yml -m gpu.a100
The default is cpu. Common GPU values include gpu.a100, gpu.h100, gpu.b200, and others depending on your account quota. The bare value gpu selects an NVIDIA T4.
Available machine types are gated per account. The CLI validates your selection against /api/v2/external/user/quotas/available-hardware before submitting the job. To see which machines you have access to, run any execution command with an unavailable type; the CLI will print the list of types your account can use.
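The same quota check can be done directly against the endpoint above. This is a minimal sketch only: the endpoint path comes from these docs, but the API host (`api.example-lyceum-host` below) and the bearer-token auth scheme are assumptions, not documented behavior.

```python
import json
import os
import urllib.request

# Endpoint path is from the docs; the host and Bearer auth are assumptions.
QUOTAS_URL = (
    "https://api.example-lyceum-host"
    "/api/v2/external/user/quotas/available-hardware"
)

def list_available_hardware(token: str):
    """Fetch the machine types this account is allowed to use."""
    req = urllib.request.Request(
        QUOTAS_URL,
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Assumes an API key in the environment (variable name is hypothetical).
    print(list_available_hardware(os.environ["LYCEUM_API_KEY"]))
```

Checking quota before submission avoids a rejected job; the CLI performs the equivalent request for you on every run.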

VM Instances

When launching dedicated VMs via the dashboard or the VMs API, the following GPU profiles are available:
| GPU  | VRAM   | RAM    | vCPU | Peak TFLOPS |
|------|--------|--------|------|-------------|
| B300 | 288 GB | 240 GB | 32   | 720         |
| B200 | 192 GB | 180 GB | 28   | 540         |
| H200 | 141 GB | 200 GB | 16   | 67          |
| H100 | 80 GB  | 180 GB | 20   | 67          |
| A100 | 80 GB  | 120 GB | 16   | 19.5        |
| L40S | 48 GB  | 128 GB | 12   | 91.6        |
Each profile can be launched with 1, 2, 4, or 8 GPUs per instance, subject to availability and account limits. For current pricing and committed-term discounts, see the dashboard launch page.
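To make the multi-GPU arithmetic concrete, here is a small sketch using the per-GPU VRAM and peak TFLOPS figures from the table above (treating both as per-GPU values, which is an assumption) to compute instance-level totals:

```python
# Per-GPU figures taken from the VM profile table.
PROFILES = {
    "B300": {"vram_gb": 288, "tflops": 720},
    "B200": {"vram_gb": 192, "tflops": 540},
    "H200": {"vram_gb": 141, "tflops": 67},
    "H100": {"vram_gb": 80, "tflops": 67},
    "A100": {"vram_gb": 80, "tflops": 19.5},
    "L40S": {"vram_gb": 48, "tflops": 91.6},
}

VALID_GPU_COUNTS = (1, 2, 4, 8)  # per-instance sizes offered

def instance_totals(profile: str, count: int) -> dict:
    """Aggregate VRAM and peak TFLOPS for an instance with `count` GPUs."""
    if count not in VALID_GPU_COUNTS:
        raise ValueError(f"instances come in {VALID_GPU_COUNTS} GPUs, got {count}")
    p = PROFILES[profile]
    return {"vram_gb": p["vram_gb"] * count, "tflops": p["tflops"] * count}

# An 8x H100 instance pools 8 * 80 GB = 640 GB of VRAM.
print(instance_totals("H100", 8))
```

Useful when sizing a model: for example, a job needing more than 192 GB of pooled VRAM does not fit a single-GPU instance of any profile except B300.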

Storage

All machine types — serverless and VM — have access to your cloud storage. See Storage for mount paths and usage.