Execute any OCI-compatible Docker image on Lyceum Cloud. Bring your own containers with custom dependencies, ML frameworks, or specialized environments.

Overview

Docker execution allows you to run containerized workloads on Lyceum’s infrastructure. Perfect for reproducible environments, custom dependencies, and production deployments.

Any Registry

Docker Hub, ECR, or private registries

GPU Support

Automatic NVIDIA runtime for ML workloads

Real-time Output

Stream logs and results as they happen

Quick Start

Run a public Docker Hub image:
curl -X POST https://api.lyceum.technology/api/v2/external/execution/image/start \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "docker_image_ref": "python:3.11-slim",
    "docker_run_cmd": ["python", "-c", "print(\"Hello from Docker!\")"],
    "execution_type": "cpu"
  }'

Container Registries

Public Images

No authentication required for public images:
{
  "docker_image_ref": "nginx:latest"
}

Private Images

Use Docker Hub credentials:
{
  "docker_image_ref": "myuser/private-image:tag",
  "docker_registry_credential_type": "basic",
  "docker_registry_credentials": {
    "username": "dockerhub-username",
    "password": "dockerhub-token"
  }
}
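In Python, the credential block above can be assembled from environment variables so secrets never appear in source code. A minimal sketch; the `DOCKERHUB_USER`/`DOCKERHUB_TOKEN` variable names are illustrative, not part of the API:

```python
import os

def private_registry_payload(image_ref,
                             username_env="DOCKERHUB_USER",
                             token_env="DOCKERHUB_TOKEN"):
    """Build the request body for a private Docker Hub image,
    reading credentials from the environment instead of hardcoding them."""
    return {
        "docker_image_ref": image_ref,
        "docker_registry_credential_type": "basic",
        "docker_registry_credentials": {
            "username": os.environ[username_env],
            "password": os.environ[token_env],  # a Docker Hub access token works here
        },
    }
```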

Amazon ECR

Use AWS credentials for private ECR images:

{
  "docker_image_ref": "245706660333.dkr.ecr.eu-north-1.amazonaws.com/myapp:latest",
  "docker_registry_credential_type": "aws",
  "docker_registry_credentials": {
    "region": "eu-north-1",
    "aws_access_key_id": "AKIAIOSFODNN7EXAMPLE",
    "aws_secret_access_key": "wJalrXUtnFEMI/K7MDENG",
    "aws_session_token": "AQoDYXdzEPT//" // optional
  }
}
ECR credentials are automatically refreshed if they expire during long-running executions

API Reference

Start Execution

Endpoint: POST /api/v2/external/execution/image/start
import requests

response = requests.post(
    "https://api.lyceum.technology/api/v2/external/execution/image/start",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={
        "docker_image_ref": "python:3.11",
        "docker_run_cmd": ["python", "script.py"],
        "docker_run_env": "PYTHONUNBUFFERED=1\nMY_VAR=value",
        "execution_type": "cpu",
        "timeout": 300
    }
)

execution_id = response.json()["execution_id"]
print(f"Started execution: {execution_id}")

Request Parameters

docker_image_ref
string
required
Fully qualified Docker image reference (e.g., python:3.11, myregistry.com/image:tag)
execution_type
string
default:"cpu"
Resource type: cpu, gpu, or auto
docker_run_cmd
array
Override container command (e.g., ["python", "script.py"])
docker_run_env
string
Environment variables as newline-separated KEY=VALUE pairs
timeout
integer
default:"300"
Maximum execution time in seconds
docker_registry_credential_type
string
Authentication type: basic or aws
docker_registry_credentials
object
Registry authentication credentials (see examples above)
user_callback_url
string
URL where execution output will be streamed in real-time
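The parameters above can be assembled with a small helper that applies the documented defaults and validates `execution_type` before sending. This is a convenience sketch, not part of any Lyceum client library:

```python
def build_execution_request(docker_image_ref,
                            execution_type="cpu",
                            docker_run_cmd=None,
                            docker_run_env=None,
                            timeout=300,
                            user_callback_url=None):
    """Build a request body for POST /api/v2/external/execution/image/start,
    applying the documented defaults (execution_type="cpu", timeout=300)."""
    if execution_type not in ("cpu", "gpu", "auto"):
        raise ValueError(f"invalid execution_type: {execution_type!r}")
    body = {
        "docker_image_ref": docker_image_ref,
        "execution_type": execution_type,
        "timeout": timeout,
    }
    if docker_run_cmd:
        body["docker_run_cmd"] = docker_run_cmd
    if docker_run_env:
        body["docker_run_env"] = docker_run_env  # newline-separated KEY=VALUE string
    if user_callback_url:
        body["user_callback_url"] = user_callback_url
    return body
```

Pass the result as the `json=` argument to `requests.post`.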

User Callbacks

Stream execution output directly to your own endpoint by providing a user_callback_url. Lyceum will send real-time output to your URL as the container runs.
response = requests.post(
    "https://api.lyceum.technology/api/v2/external/execution/image/start",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={
        "docker_image_ref": "python:3.11",
        "docker_run_cmd": ["python", "train.py"],
        "execution_type": "gpu",
        "user_callback_urls": "https://your-server.com/webhook/lyceum"
    }
)
Your endpoint will receive POST requests with the execution output as the container runs. This is useful for:
  • Integrating with your own logging systems
  • Triggering downstream workflows based on output
  • Building custom monitoring dashboards
  • Storing execution logs in your own infrastructure
Ensure your callback endpoint is publicly accessible and can handle multiple POST requests during execution.
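A callback receiver can be as small as a standard-library HTTP server that accepts each POST and stores the body. A minimal sketch, assuming Lyceum delivers output as plain POST bodies (adapt the parsing to whatever payload format your executions actually send):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LyceumCallbackHandler(BaseHTTPRequestHandler):
    """Accepts the POST requests Lyceum sends to user_callback_url
    and appends each request body to an in-memory log."""
    received = []  # collected output chunks (shared across requests)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        type(self).received.append(body)
        self.send_response(200)  # acknowledge so delivery isn't retried
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request console logging

# To run: HTTPServer(("0.0.0.0", 8000), LyceumCallbackHandler).serve_forever()
```

In production you would write each chunk to durable storage instead of a list, and put the server behind TLS.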

Environment Configuration

Pass environment variables to your container:
env_vars = """\
PYTHONUNBUFFERED=1
DATABASE_URL=postgresql://user:pass@host/db
API_KEY=your-secret-key
DEBUG=true
"""

response = requests.post(
    url,
    json={
        "docker_image_ref": "myapp:latest",
        "docker_run_env": env_vars
    }
)
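Rather than maintaining the newline-separated string by hand, you can generate it from a regular dict. A small helper sketch (not part of any Lyceum client):

```python
def format_env(env: dict) -> str:
    """Serialize a dict into the newline-separated KEY=VALUE string
    that docker_run_env expects."""
    for key in env:
        if "=" in key or "\n" in key:
            raise ValueError(f"invalid variable name: {key!r}")
    return "\n".join(f"{k}={v}" for k, v in env.items())
```

Values containing `=` are fine; only the key must be a plain name.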

Common Use Cases

ML Training

# PyTorch training job
{
  "docker_image_ref": "pytorch/pytorch:latest",
  "docker_run_cmd": ["python", "train.py"],
  "execution_type": "gpu",
  "timeout": 7200,
  "docker_run_env": "CUDA_VISIBLE_DEVICES=0"
}

Data Processing

# ETL pipeline
{
  "docker_image_ref": "apache/spark:3.5.0",
  "docker_run_cmd": ["spark-submit", "etl.py"],
  "execution_type": "cpu",
  "docker_run_env": "SPARK_MASTER=local[*]"
}

Web Scraping

# Selenium automation
{
  "docker_image_ref": "selenium/standalone-chrome",
  "docker_run_cmd": ["python", "scraper.py"],
  "execution_type": "cpu",
  "timeout": 1800
}

API Services

# FastAPI service
{
  "docker_image_ref": "tiangolo/uvicorn-gunicorn:python3.11",
  "docker_run_cmd": ["uvicorn", "main:app"],
  "execution_type": "cpu",
  "docker_run_env": "PORT=8000"
}

Monitoring Execution

Real-time Output Streaming

import requests
import sseclient  # pip install sseclient

# Start execution and get the callback URL
response = requests.post(...)
callback_url = response.json()["callback_url"]

# Stream output
messages = sseclient.SSEClient(callback_url)
for msg in messages:
    if msg.data:
        print(msg.data)
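If you prefer not to add the `sseclient` dependency, a few lines of standard Python can extract the `data:` payloads from an event stream. This parser is a minimal sketch and assumes the callback URL serves standard Server-Sent Events:

```python
def iter_sse_data(lines):
    """Yield the data payload of each SSE event from an iterable of
    text lines; multi-line data fields are joined with newlines."""
    data = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            data.append(line[5:].lstrip(" "))
        elif line == "" and data:
            yield "\n".join(data)  # blank line terminates an event
            data = []
    if data:
        yield "\n".join(data)

# Usage with requests:
# resp = requests.get(callback_url, stream=True)
# for chunk in iter_sse_data(resp.iter_lines(decode_unicode=True)):
#     print(chunk)
```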

Best Practices

Image Optimization

  • Use slim base images (alpine or slim variants)
  • Use multi-stage builds to reduce image size
  • Cache dependencies in separate layers
  • Remove unnecessary files and packages
# Good: Multi-stage build
FROM python:3.11 AS builder
COPY requirements.txt .
RUN pip install --user -r requirements.txt

FROM python:3.11-slim
COPY --from=builder /root/.local /root/.local
COPY . .
CMD ["python", "app.py"]
Security

  • Never hardcode secrets in images
  • Use environment variables for credentials
  • Scan images for vulnerabilities
  • Use specific tags, not latest
# Good: Pass secrets at request time (Python; assumes `import os`)
{
    "docker_run_env": f"API_KEY={os.environ['API_KEY']}"
}

# Bad: Hardcoded in image
# RUN echo "API_KEY=secret" >> .env
Performance

  • Pre-pull large images to reduce startup time
  • Use appropriate resource allocation
  • Set reasonable timeouts
  • Implement health checks
# Optimize for quick starts
{
    "docker_image_ref": "python:3.11-alpine",  # Small image
    "execution_type": "auto",  # Let platform optimize
    "timeout": 300  # Reasonable limit
}

Troubleshooting

Common issues and solutions:
Issue: Container image not found or authentication failed

Solutions:
  • Verify image name and tag exist
  • Check registry credentials
  • Ensure image is compatible with linux/amd64
# Test locally first
docker pull your-image:tag
# Confirm the image targets linux/amd64
docker image inspect your-image:tag --format '{{.Os}}/{{.Architecture}}'
Need help? Contact [email protected] or check our API Reference for detailed specifications.