VMs are dedicated, long-lived machines you SSH into. Use them for training jobs that span hours, interactive development, or anything that doesn’t fit a one-shot serverless run.
Provisioning typically takes 1–3 minutes. The CLI polls automatically; the API exposes GET /vms/{vm_id}/status so you can poll yourself.
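In outline, that readiness polling looks like the sketch below. The status fetch is passed in as a zero-argument callable so the sketch stays independent of any particular HTTP client; the status strings mirror the Python examples later on this page, but the helper itself is ours, not part of any SDK.

```python
import time

def wait_until_ready(fetch_status, timeout: float = 600, interval: float = 20) -> str:
    """Poll fetch_status() until the VM reports ready/running, or give up.

    fetch_status is any zero-argument callable returning the VM's status
    string -- e.g. a thin wrapper around GET /vms/{vm_id}/status.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = fetch_status()
        if status in ("ready", "running"):
            return status
        if status in ("failed", "error"):
            raise RuntimeError(f"VM failed with status {status!r}")
        time.sleep(interval)
    raise TimeoutError("VM provisioning timed out")
```

With the CLI you rarely need this yourself: lyceum vm start runs the same loop for you unless you pass -a.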

Quick Start with the CLI

# Authenticate (one-time)
lyceum auth login

# See what hardware is available right now
lyceum vm availability

# Start an A100 instance
lyceum vm start \
  -h a100 \
  -k "$(cat ~/.ssh/id_ed25519.pub)"

# CLI waits for the VM to become ready and prints its IP.
# Then SSH in (the username depends on the image — see vm status output).
ssh root@<ip>

To launch with multiple GPUs, add -g 2 (or 4 or 8, where the profile supports it). To return immediately instead of waiting for the VM to become ready, add -a / --async. For the full set of available profiles and flags, see the VM page and Launch an Instance.
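If you are scripting launches, it can help to assemble the invocation programmatically. A minimal sketch using only the flags documented above (the helper name is our own):

```python
def build_start_cmd(hardware: str, pubkey: str, gpus: int = 1, wait: bool = True) -> list:
    """Assemble the argv for a `lyceum vm start` invocation."""
    cmd = ["lyceum", "vm", "start", "-h", hardware, "-k", pubkey]
    if gpus > 1:
        cmd += ["-g", str(gpus)]      # multi-GPU, where the profile supports it
    if not wait:
        cmd.append("-a")              # --async: return without waiting for ready
    return cmd

# e.g. subprocess.run(build_start_cmd("a100", pubkey, gpus=2, wait=False))
```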

ML Training Workflow

1. Start the VM

lyceum vm start -h a100 -k "$(cat ~/.ssh/id_ed25519.pub)"
2. Connect and set up the environment

ssh root@<ip>

# On the VM
nvidia-smi                          # verify GPU is visible
git clone https://github.com/your-org/ml-project.git
cd ml-project
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
3. Run training inside tmux

tmux new -s training
python train.py --epochs 100 --batch-size 32
# Detach with Ctrl+B, D — reattach later with: tmux attach -t training
4. Pull results and terminate

# From your local machine
scp root@<ip>:~/ml-project/model.pt ./

# Tear the VM down
lyceum vm terminate <vm_id> -f

The local disk on a VM is wiped on termination. Anything you want to keep should be scp’d off, pushed to git, or written to your storage bucket before terminating.

API Examples

The CLI is a thin wrapper around the VMs API. Use the API directly when you need to integrate provisioning into your own tooling.

Provision and wait

import requests
import time

BASE_URL = "https://api.lyceum.technology"
TOKEN = "your-token"

def create_vm(public_key: str, hardware_profile: str = "a100", gpu_count: int = 1):
    """Create a new VM instance."""
    payload = {
        "user_public_key": public_key,
        "hardware_profile": hardware_profile,
        "instance_specs": {"gpu_count": gpu_count},
    }
    r = requests.post(
        f"{BASE_URL}/api/v2/external/vms/create",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
    )
    r.raise_for_status()
    return r.json()

def wait_for_ready(vm_id: str, timeout: int = 600) -> dict:
    """Poll until the VM is ready or fails."""
    start = time.time()
    while time.time() - start < timeout:
        r = requests.get(
            f"{BASE_URL}/api/v2/external/vms/{vm_id}/status",
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        r.raise_for_status()
        status = r.json()
        if status["status"] in ("ready", "running"):
            return status
        if status["status"] in ("failed", "error"):
            raise RuntimeError(f"VM failed: {status}")
        print(f"Status: {status['status']}...")
        time.sleep(20)
    raise TimeoutError("VM provisioning timed out")

with open("/Users/me/.ssh/id_ed25519.pub") as f:
    public_key = f.read().strip()

vm = create_vm(public_key, hardware_profile="a100", gpu_count=1)
print(f"Created VM: {vm['vm_id']}")

ready = wait_for_ready(vm["vm_id"])
print(f"VM ready! IP: {ready['ip_address']}")

List VMs

def list_vms():
    r = requests.get(
        f"{BASE_URL}/api/v2/external/vms/list",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    r.raise_for_status()
    return r.json().get("vms", [])

for vm in list_vms():
    name = vm.get("name") or "-"
    print(f"{vm['vm_id']:38}  {vm['status']:10}  {vm.get('hardware_profile', '-'):10}  {name}")

Check availability

def check_availability():
    r = requests.get(
        f"{BASE_URL}/api/v2/external/vms/availability",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    r.raise_for_status()
    return r.json().get("available_hardware_profiles", [])

for profile in check_availability():
    print(f"{profile['hardware_profile']}: ${profile.get('price_per_hour', 0):.2f}/GPU/hr")
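Since prices come back per GPU-hour, a rough pre-launch cost estimate is just price × GPU count × hours. A trivial helper (our own, not part of the API):

```python
def estimate_cost(price_per_gpu_hour: float, gpu_count: int, hours: float) -> float:
    """Rough cost estimate in dollars for a planned run."""
    return price_per_gpu_hour * gpu_count * hours

# e.g. 8 GPUs at $1.50/GPU/hr for a 12-hour training run
print(f"${estimate_cost(1.50, 8, 12):.2f}")  # $144.00
```

Treat this as a floor: VMs bill until you terminate them, not until your script exits.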

Common Patterns

# Long-running jobs in tmux
tmux new -s training
python train.py --epochs 1000
# Detach: Ctrl+B, D
# Reattach: tmux attach -t training

# Upload
scp ./data.tar.gz root@<ip>:~/

# Download
scp root@<ip>:~/results/* ./local-results/

# Sync directories
rsync -avz ./project/ root@<ip>:~/project/

# Jupyter on 8888
ssh -L 8888:localhost:8888 root@<ip>

# Multiple ports
ssh -L 8888:localhost:8888 -L 6006:localhost:6006 root@<ip>

VMs bill from the moment they enter the running state until you terminate them, regardless of whether anything is executing. Run lyceum vm list periodically to make sure you don’t have idle instances.
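To automate that check, filter the records returned by list_vms above on their status field. A sketch over stub data (the status values are the ones used in the polling example; the field names match the list response above):

```python
def find_running(vms: list) -> list:
    """Return the VM records that are still billing (status ready/running)."""
    return [vm for vm in vms if vm.get("status") in ("ready", "running")]

# Stub data in the shape returned by the list endpoint:
vms = [
    {"vm_id": "vm-abc", "status": "running", "hardware_profile": "a100"},
    {"vm_id": "vm-def", "status": "terminated", "hardware_profile": "a100"},
]
for vm in find_running(vms):
    print(f"still billing: {vm['vm_id']}")  # still billing: vm-abc
```

Terminate anything you don’t recognize with lyceum vm terminate <vm_id> -f.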