Run Python code on cloud infrastructure. Dependencies are automatically detected from your imports and installed.

Hello World

```shell
lyceum python run 'print("Hello from Lyceum Cloud!")'
```

Using Packages

Lyceum detects imports and installs packages automatically:
```python
import numpy as np
import pandas as pd

data = np.random.rand(100, 3)
df = pd.DataFrame(data, columns=["a", "b", "c"])

print(df.describe())
```
No manual install needed — Lyceum detects import statements and installs packages before execution.
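To make the detection step concrete, here is a rough sketch of how import-based dependency detection can work, using Python's `ast` module. This is an illustration only, not Lyceum's actual implementation; the `detect_imports` helper is hypothetical.

```python
import ast

def detect_imports(source: str) -> set[str]:
    """Collect top-level module names imported by a piece of Python source.

    Hypothetical helper for illustration; Lyceum's real detection may differ.
    """
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            # `import numpy as np` -> "numpy"
            for alias in node.names:
                modules.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            # `from pandas.api import types` -> "pandas"; skip relative imports
            if node.module and node.level == 0:
                modules.add(node.module.split(".")[0])
    return modules

script = "import numpy as np\nfrom pandas.api import types"
print(sorted(detect_imports(script)))  # ['numpy', 'pandas']
```

Mapping a module name like `cv2` to its PyPI package name (`opencv-python`) takes an extra lookup table, which is one reason an explicit requirements file is still useful for pinning.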

Specifying Requirements

Pin exact versions by passing a requirements file:

```shell
lyceum python run script.py -r requirements.txt
```
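A minimal `requirements.txt` might look like the following (the version numbers here are illustrative, not recommendations):

```text
numpy==1.26.4
pandas==2.2.2
```

Pinned versions make runs reproducible even when newer releases of a package change behavior.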

Reading Files from Storage

Files uploaded to Lyceum storage are mounted in your execution environment:
```python
import pandas as pd

# Read from storage
df = pd.read_csv('/lyceum/storage/data.csv')
print(f"Loaded {len(df)} rows")

# Save results back to storage
df.describe().to_csv('/lyceum/storage/summary.csv')
print("Results saved to storage")
```
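The same read-then-summarize round trip can be tried locally with only the standard library, using a temporary directory as a stand-in for the `/lyceum/storage/` mount (the file name `data.csv` is illustrative):

```python
import csv
import statistics
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    storage = Path(tmp)  # stand-in for /lyceum/storage/

    # Write a small data file, as if it had been uploaded to storage.
    with open(storage / "data.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["a"])
        writer.writerows([[1], [2], [3]])

    # Read it back and compute a summary.
    with open(storage / "data.csv", newline="") as f:
        rows = [float(r["a"]) for r in csv.DictReader(f)]
    print(f"Loaded {len(rows)} rows")

    # Save the summary next to the input, mirroring the pandas example.
    with open(storage / "summary.csv", "w", newline="") as f:
        csv.writer(f).writerows([["count", "mean"], [len(rows), statistics.mean(rows)]])
    print("Results saved to storage")
```

On Lyceum the temporary directory is unnecessary: anything written under `/lyceum/storage/` persists after the run finishes.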

Passing Script Arguments

Pass arguments to your script after --:
```shell
lyceum python run train.py -- --epochs 10 --batch-size 32
```

```python
# train.py
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=5)
parser.add_argument('--batch-size', type=int, default=16)
args = parser.parse_args()

print(f"Training for {args.epochs} epochs with batch size {args.batch_size}")
```
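Because `argparse.ArgumentParser.parse_args` accepts an explicit argument list, the parser above can be sanity-checked locally before submitting a run. This is a quick local test, not part of the Lyceum workflow itself:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=5)
parser.add_argument('--batch-size', type=int, default=16)

# Simulate `-- --epochs 10 --batch-size 32` by passing the list directly.
args = parser.parse_args(['--epochs', '10', '--batch-size', '32'])
print(args.epochs, args.batch_size)  # 10 32

# Omitted flags fall back to their defaults.
defaults = parser.parse_args([])
print(defaults.epochs, defaults.batch_size)  # 5 16
```

Note that `--batch-size` is exposed on the parsed namespace as `batch_size`: argparse converts hyphens in option names to underscores.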

GPU Execution

Run on GPU hardware for compute-intensive tasks:
```shell
# A100 GPU
lyceum python run train.py -m gpu.a100

# H100 GPU
lyceum python run train.py -m gpu.h100
```

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```
Files saved to /lyceum/storage/ persist across executions and can be downloaded from the dashboard or VS Code extension.
Set reasonable timeouts to avoid runaway executions. The default timeout is 300 seconds.