Execute any OCI-compatible Docker image on Lyceum infrastructure with GPU support and real-time log streaming.
## Run a Public Image

```shell
lyceum docker run python:3.11-slim \
  -c "python -c \"print('Hello from Docker!')\"" \
  -m cpu
```
## GPU Workload

```shell
lyceum docker run pytorch/pytorch:2.6.0-cuda12.4-cudnn9-runtime \
  -c "python -c \"import torch; print(f'CUDA available: {torch.cuda.is_available()}')\"" \
  -m a100 \
  -t 600
```
## Private Registry

### Docker Hub (private)

```shell
lyceum docker run myuser/private-image:latest \
  --registry-type basic \
  --registry-creds '{"username":"myuser","password":"dckr_pat_xxx"}'
```

### AWS ECR

```shell
lyceum docker run 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest \
  --registry-type aws \
  --registry-creds '{"region":"us-east-1","aws_access_key_id":"AKIA...","aws_secret_access_key":"..."}'
```
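Because the `--registry-creds` value is raw JSON, it is easy to get the quoting wrong or leak a token into shell history. A minimal sketch, assuming hypothetical `DOCKERHUB_USER` and `DOCKERHUB_TOKEN` environment variables, that builds the JSON with `json.dumps` instead of typing it inline:

```python
import json
import os

def registry_creds_from_env() -> str:
    """Build the --registry-creds JSON from environment variables.

    The variable names (DOCKERHUB_USER, DOCKERHUB_TOKEN) are illustrative,
    not part of the CLI.
    """
    creds = {
        "username": os.environ["DOCKERHUB_USER"],
        "password": os.environ["DOCKERHUB_TOKEN"],
    }
    return json.dumps(creds)

if __name__ == "__main__":
    # Illustrative fallbacks so the sketch runs standalone.
    os.environ.setdefault("DOCKERHUB_USER", "myuser")
    os.environ.setdefault("DOCKERHUB_TOKEN", "dckr_pat_xxx")
    print(registry_creds_from_env())
```

The printed JSON can then be passed as `--registry-creds "$(python make_creds.py)"` (filename illustrative), keeping the token out of the command line you type.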
## Environment Variables

```shell
lyceum docker run python:3.11 \
  -c "python app.py" \
  -e "DATABASE_URL=postgresql://host/db" \
  -e "API_KEY=your-key" \
  -m cpu
```
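For context, here is a sketch of how the `app.py` above might consume those variables. The names match the `-e` flags; the fallback defaults are illustrative:

```python
import os

def load_config() -> dict:
    """Read configuration passed into the container via -e flags.

    The defaults are illustrative fallbacks for local runs.
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        "api_key": os.environ.get("API_KEY", ""),
    }

if __name__ == "__main__":
    cfg = load_config()
    print(f"Connecting to {cfg['database_url']}")
```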
## Detached Mode

Run in the background and check logs later:

```shell
# Start in detached mode
lyceum docker run python:3.11 -c "python long_job.py" -d -m cpu -t 3600

# Check running workloads
lyceum workloads list

# Stream logs
lyceum docker logs <execution_id>

# Abort if needed
lyceum workloads abort <execution_id>
```
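A detached run is only as observable as its output, so it helps if `long_job.py` flushes periodic progress lines for `lyceum docker logs` to stream. A minimal sketch, where the step count and sleep are stand-ins for real work:

```python
import time

def run_job(steps: int = 5, delay: float = 0.1) -> int:
    """Do work in steps, printing flushed progress lines as it goes."""
    done = 0
    for step in range(1, steps + 1):
        time.sleep(delay)  # placeholder for real work
        done += 1
        # flush=True so lines appear in the log stream immediately
        print(f"step {step}/{steps} complete", flush=True)
    return done

if __name__ == "__main__":
    run_job()
```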
## Docker Compose

Run multi-container applications:

```shell
lyceum compose run docker-compose.yml -m a100 -t 600
```

```yaml
# docker-compose.yml
services:
  api:
    image: python:3.11
    command: python api.py
    environment:
      - PORT=8000
  worker:
    image: python:3.11
    command: python worker.py
    depends_on:
      - api
```
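Note that `depends_on` only controls start order, not readiness, so `worker.py` should still wait until the `api` service accepts connections. A minimal sketch using the service name and port from the compose file above (retry counts are illustrative):

```python
import socket
import time

def wait_for(host: str, port: int, attempts: int = 3, delay: float = 0.5) -> bool:
    """Retry a TCP connection until the service is reachable or we give up."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(delay)
    return False

if __name__ == "__main__":
    # "api" resolves via the compose network; it will not resolve locally.
    ready = wait_for("api", 8000)
    print("api is up" if ready else "api never became reachable")
```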
## Tips

- Use smaller base images (alpine or slim variants) for faster startup; container images are pulled fresh on every execution.
- Never hardcode secrets in Docker images. Pass them via environment variables with the `-e` flag or the `docker_run_env` API parameter.
- Containers automatically have access to your Lyceum storage, so files persist across executions.
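Since files persist across executions, a job can checkpoint its progress and resume on the next run. A minimal sketch; the checkpoint filename and location are illustrative, so place it wherever your Lyceum storage is mounted:

```python
import json
import time
from pathlib import Path

def save_checkpoint(path: Path, step: int) -> None:
    """Record the last completed step on persistent storage."""
    path.write_text(json.dumps({"step": step, "saved_at": time.time()}))

def load_checkpoint(path: Path) -> int:
    """Return the last saved step, or 0 for a fresh start."""
    if path.exists():
        return json.loads(path.read_text())["step"]
    return 0

if __name__ == "__main__":
    ckpt = Path("checkpoint.json")  # illustrative location
    step = load_checkpoint(ckpt)
    save_checkpoint(ckpt, step + 1)
    print(f"resumed at step {step}")
```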