What You Get
Run a single command and Pythia will:
- Profile your code on all available GPU types
- Predict memory requirements to identify compatible hardware
- Estimate runtime for each GPU option
- Recommend the best GPU based on your workload
CLI Usage
Run GPU Selection
Options
| Option | Description |
|---|---|
| -f, --file-name | Name for the execution |
| -t, --timeout | Timeout per sub-job in seconds (1-600, default: 60) |
| -r, --requirements | Requirements file path or pip requirements string |
| --import | Pre-import modules (can be used multiple times) |
| --no-config | Skip workspace config from .lyceum/config.json |
| -d, --debug | Show detailed debug information |
Check Status
Requirements
Supported code
- Single Python script as entry point (executed as main)
- Must target CUDA/GPU as the PyTorch device
- Command-line arguments are supported
- PyTorch and PyTorch Lightning
- Imported models (Hugging Face, torchvision, timm)
- Single model, single GPU
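For illustration, a minimal script that fits these constraints might look like the sketch below. The model choice (torchvision's resnet18), the synthetic data, and the training loop are assumptions for the example, not requirements of Pythia.

```python
# sketch_train.py -- illustrative only; model, data, and loop are placeholders.
import argparse

import torch
import torch.nn as nn
from torchvision.models import resnet18


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--batch-size", type=int, default=32)  # CLI args are supported
    parser.add_argument("--steps", type=int, default=100)
    args = parser.parse_args()

    device = torch.device("cuda")                    # code must target CUDA/GPU
    model = resnet18(num_classes=10).to(device)      # single imported model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(args.steps):                      # plain PyTorch training loop
        x = torch.randn(args.batch_size, 3, 224, 224, device=device)
        y = torch.randint(0, 10, (args.batch_size,), device=device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()


if __name__ == "__main__":                           # single script, executed as main
    main()
```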
Not supported
- Jupyter notebooks (not yet supported)
- Advanced or custom gradient manipulation
- CUDA extensions that bypass PyTorch tensors
Prediction Details
Memory prediction
Predicts training and inference memory for a single model. Accounts for model weights, gradients, activations, optimizer states, and mixed precision. Does not account for data transfers outside the training/inference loop or operator memory overhead (intermediate tensors).
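For intuition only (this is not Pythia's prediction model), the weight-related part of training memory can be sketched with a common back-of-envelope rule: in fp32 with Adam, each parameter needs roughly 4 bytes for the weight, 4 for its gradient, and 8 for the two optimizer states. The resnet18 model below is just an example.

```python
# Back-of-envelope estimate of the weight-related part of training memory.
# Illustrative only; activations (batch-size dependent) are not included here.
from torchvision.models import resnet18

model = resnet18()
n_params = sum(p.numel() for p in model.parameters())

bytes_per_param = 4 + 4 + 8   # fp32 weight + gradient + two Adam states
static_gib = n_params * bytes_per_param / 2**30
print(f"{n_params / 1e6:.1f}M params -> ~{static_gib:.2f} GiB before activations")
```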
Runtime prediction
Estimates inference/training loop time based on per-iteration time. Does not account for VM startup time or initial data downloading and loading.
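The underlying idea can be illustrated with a rough sketch (this is not Pythia's implementation): time a few steady-state iterations after a warm-up, then extrapolate to the full loop. The model, batch size, and iteration counts below are placeholders.

```python
# Illustrative only: extrapolate loop time from measured per-iteration time.
import time

import torch
import torch.nn as nn
from torchvision.models import resnet18

device = torch.device("cuda")
model = resnet18().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(32, 3, 224, 224, device=device)
y = torch.randint(0, 1000, (32,), device=device)


def step():
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()


for _ in range(5):               # warm-up: CUDA context, cuDNN autotuning
    step()
torch.cuda.synchronize()

n_timed = 20
start = time.perf_counter()
for _ in range(n_timed):
    step()
torch.cuda.synchronize()         # flush queued GPU work before stopping the clock
iter_time = (time.perf_counter() - start) / n_timed

total_iters = 10_000
print(f"~{iter_time * 1e3:.1f} ms/iter -> ~{iter_time * total_iters / 60:.1f} min "
      f"for {total_iters} iterations (excludes startup and data loading)")
```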

