
GPU Selection Commands

Run your code on multiple GPUs to determine optimal hardware based on memory requirements, runtime performance, and cost.

Commands

| Command | Description |
| --- | --- |
| lyceum gpu-selection run | Analyze code and find the optimal GPU |
| lyceum gpu-selection status | Check status of a GPU selection job |

lyceum gpu-selection run

Submits your code to run on all GPU profiles available to your account, then returns which GPU performed best.
lyceum gpu-selection run <code_or_file>

Arguments

| Argument | Description |
| --- | --- |
| code_or_file | (required) Python code to execute or path to a Python file |

Options

| Option | Description |
| --- | --- |
| --file-name, -f | Name for the execution |
| --timeout, -t | Timeout per sub-job in seconds (1-600). Default: 60 |
| --requirements, -r | Requirements file path or pip requirements string |
| --import | Pre-import modules (can be used multiple times) |
| --use-config/--no-config | Use workspace config from .lyceum/config.json if available. Default: enabled |
| --optimize, -o | Optimization strategy: cost (cheapest), speed (fastest), or util (highest utilization). Default uses the API recommendation |
| --debug, -d | Show detailed debug information |

Script Arguments

Pass arguments to your script using -- separator:
lyceum gpu-selection run train.py -- --epochs 10 --lr 0.001
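Everything after `--` reaches your script verbatim, exactly as if you had run `python train.py --epochs 10 --lr 0.001`. A hypothetical `train.py` (the flag names here are just for illustration) can therefore read them with standard argparse:

```python
import argparse

# A hypothetical train.py: flags after `--` arrive as ordinary
# command-line arguments, so standard argparse handles them.
parser = argparse.ArgumentParser(description="Toy training entry point")
parser.add_argument("--epochs", type=int, default=1)
parser.add_argument("--lr", type=float, default=0.01)

# Equivalent to: lyceum gpu-selection run train.py -- --epochs 10 --lr 0.001
args = parser.parse_args(["--epochs", "10", "--lr", "0.001"])
print(args.epochs, args.lr)  # → 10 0.001
```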

Examples

# Analyze a training script
lyceum gpu-selection run train.py

# Analyze with custom timeout
lyceum gpu-selection run train.py --timeout 120

# Optimize for cost
lyceum gpu-selection run train.py --optimize cost

# Optimize for speed
lyceum gpu-selection run train.py --optimize speed

# Pass script arguments
lyceum gpu-selection run train.py -- --batch-size 32 --epochs 5

Output

The command provides:
  • Memory Breakdown: Shows parameter count, model weights, gradients, optimizer states, and activations
  • Compatible GPUs: Lists all GPUs that can run your workload with VRAM requirements and utilization
  • Runtime Prediction: Estimated execution time and cost for each GPU
  • Recommendation: The best GPU based on your optimization strategy
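As a rough illustration of why the breakdown separates weights, gradients, and optimizer states (the fp32 byte counts and the two-state Adam assumption are ours, not the tool's exact model), the fixed memory costs of training scale with the parameter count:

```python
def training_memory_gb(n_params, bytes_per_param=4, optimizer_states=2):
    """Rough fp32 training-memory estimate. Activations are excluded:
    they depend on batch size and architecture, not just parameter count."""
    weights = n_params * bytes_per_param                        # model weights
    gradients = n_params * bytes_per_param                      # one gradient per weight
    optimizer = n_params * bytes_per_param * optimizer_states   # e.g. Adam's m and v
    return (weights + gradients + optimizer) / 1024**3

# A 1-billion-parameter model in fp32 with Adam:
print(round(training_memory_gb(1_000_000_000), 1))  # → 14.9 (GB, before activations)
```

Estimates like this explain why a model that fits on a 16 GB T4 for inference can still require a 40 GB or 80 GB A100 to train.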
After analysis, you’ll see a suggested run command:
▶ Run on Lyceum:

  lyceum python run train.py -m gpu.a100

lyceum gpu-selection status

Check the status of a previously submitted GPU selection job.
lyceum gpu-selection status <execution_id>

Arguments

| Argument | Description |
| --- | --- |
| execution_id | (required) Execution ID to check |

Options

| Option | Description |
| --- | --- |
| --optimize, -o | Optimization strategy for displaying results: cost, speed, or util |

Examples

# Check job status
lyceum gpu-selection status exec_abc123

# Check status optimizing for cost
lyceum gpu-selection status exec_abc123 --optimize cost

Supported GPUs

GPU selection analyzes performance across these GPU types:
| GPU | VRAM | Use Case |
| --- | --- | --- |
| T4 | 16 GB | Development, small models |
| L40S | 48 GB | Medium models, inference |
| A100 (40GB) | 40 GB | Training, large models |
| A100 (80GB) | 80 GB | Large models, multi-GPU |
| H100 | 80 GB | Fastest training, LLMs |
| H200 | 141 GB | Largest models |

Requirements

Your code must use PyTorch or the HuggingFace ecosystem so that GPU selection can analyze its memory and performance requirements. The analysis works best with:
  • Models defined as nn.Module subclasses
  • GPU operations using .to('cuda') or device = torch.device('cuda')
  • Training loops with loss.backward() and optimizer.step()
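A minimal script satisfying all three requirements might look like the sketch below. The model size and training loop are illustrative, and the CPU fallback is our addition so the script stays runnable off-GPU:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # model defined as an nn.Module subclass
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 1)

    def forward(self, x):
        return self.fc(x)

# GPU placement the analyzer can detect (falls back to CPU if no GPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = TinyNet().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(8, 16, device=device)
y = torch.randn(8, 1, device=device)

for _ in range(3):        # training loop with backward pass and optimizer step
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()       # gradients
    optimizer.step()      # optimizer update
```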