GPU Selection Commands
Run your code on multiple GPUs to determine the optimal hardware based on memory requirements, runtime performance, and cost.

Commands
| Command | Description |
|---|---|
| lyceum gpu-selection run | Analyze code and find the optimal GPU |
| lyceum gpu-selection status | Check the status of a GPU selection job |
lyceum gpu-selection run
Submits your code to run on every GPU profile available to your account, then reports which GPU performed best.

Arguments
| Argument | Description |
|---|---|
| code_or_file | (required) Python code to execute or path to a Python file |
Options
| Option | Description |
|---|---|
| --file-name, -f | Name for the execution |
| --timeout, -t | Timeout per sub-job in seconds (1-600). Default: 60 |
| --requirements, -r | Requirements file path or pip requirements string |
| --import | Pre-import modules (can be used multiple times) |
| --use-config/--no-config | Use workspace config from .lyceum/config.json if available. Default: enabled |
| --optimize, -o | Optimization strategy: cost (cheapest), speed (fastest), or util (highest utilization). Default uses the API recommendation |
| --debug, -d | Show detailed debug information |
Script Arguments
Pass arguments to your script after the -- separator.
Examples
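A few illustrative invocations; the file names, requirements file, and script arguments below are placeholders, not part of the CLI:

```
# Analyze a local training script with the default 60 s per-sub-job timeout
lyceum gpu-selection run train.py

# Provide dependencies and optimize for the cheapest compatible GPU
lyceum gpu-selection run train.py -r requirements.txt --optimize cost

# Raise the timeout and pass arguments through to the script after --
lyceum gpu-selection run train.py -t 300 -- --epochs 5 --batch-size 32
```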
Output
The command provides:

- Memory Breakdown: shows parameter count, model weights, gradients, optimizer states, and activations
- Compatible GPUs: lists all GPUs that can run your workload, with VRAM requirements and utilization
- Runtime Prediction: estimated execution time and cost for each GPU
- Recommendation: the best GPU based on your optimization strategy
lyceum gpu-selection status
Check the status of a previously submitted GPU selection job.

Arguments
| Argument | Description |
|---|---|
| execution_id | (required) Execution ID to check |
Options
| Option | Description |
|---|---|
| --optimize, -o | Optimization strategy for displaying results: cost, speed, or util |
Examples
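For example, to re-display the results of a completed job ranked by cost (the execution ID below is a placeholder):

```
lyceum gpu-selection status abc123 --optimize cost
```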
Supported GPUs
GPU selection analyzes performance across these GPU types:

| GPU | VRAM | Use Case |
|---|---|---|
| T4 | 16 GB | Development, small models |
| L40S | 48 GB | Medium models, inference |
| A100 (40GB) | 40 GB | Training, large models |
| A100 (80GB) | 80 GB | Large models, multi-GPU |
| H100 | 80 GB | Fastest training, LLMs |
| H200 | 141 GB | Largest models |
Requirements
Your code must use PyTorch or the Hugging Face ecosystem for GPU selection to analyze memory and performance requirements. The analysis works best with:

- Models defined as `nn.Module` subclasses
- GPU operations using `.to('cuda')` or `device = torch.device('cuda')`
- Training loops with `loss.backward()` and `optimizer.step()`
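A minimal sketch of a script that exercises all three of the patterns above. The model and training loop here are illustrative, not required shapes; the sketch falls back to CPU so it runs anywhere, but on the service `.to('cuda')` placement is what the analyzer expects:

```python
import torch
import torch.nn as nn

# Model defined as an nn.Module subclass
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, x):
        return self.net(x)

# GPU placement via torch.device('cuda'); CPU fallback keeps this sketch portable
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = TinyNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop with loss.backward() and optimizer.step()
for step in range(3):
    x = torch.randn(32, 64, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```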

