Every Lyceum account gets a dedicated S3-compatible bucket. It’s the right place for anything that needs to outlive a single run or VM: input datasets, trained model weights, intermediate artefacts, results.

Why use it instead of…

  • …Secrets — Secrets are short string values (tokens, URLs). Storage is for files of any size.
  • …A VM disk — VM disks are wiped on termination. The storage bucket persists across every VM and run on your account.
  • …Re-uploading on every run — A run that reads from the bucket starts immediately; one that re-uploads its inputs pays the upload time on every invocation.

Access patterns

There are two ways to interact with the bucket:
  1. Through the Lyceum REST API — simple, single-file uploads/downloads, easy curl commands.
  2. Direct S3 with temporary credentials — fetch short-lived MinIO/S3 credentials from POST /storage/credentials and use any standard S3 client (boto3, aws-cli, mc, …). This is the right path for large files, parallel transfers, multipart uploads, and anything that benefits from a real S3 client library.
The temporary credentials returned by /storage/credentials are STS-style: an access key, secret key, session token, the bucket name, and the endpoint. They expire automatically — request fresh ones whenever you need them.

CLI

lyceum storage ls                                       # list files at the root
lyceum storage ls data/ -r                              # list a folder recursively
lyceum storage load local-file.csv                      # upload
lyceum storage load local-file.csv --key data/x.csv     # upload to a specific path
lyceum storage load ./local-folder -r                   # upload a directory
lyceum storage download path/in/bucket/file.csv         # download
lyceum storage download path/in/bucket/file.csv -o ./   # download to a specific path
lyceum storage rm path/in/bucket/file.csv               # delete a file
lyceum storage rmdir old-data/                          # delete a folder
--key controls the destination key inside the bucket; without it the file is uploaded under its local name.

REST API

  • GET /storage/list-files: list files (optional prefix and max_files parameters)
  • POST /storage/upload: upload a single file (multipart form, optional key query parameter)
  • POST /storage/upload-bulk: upload multiple files in one request
  • GET /storage/download/{file_key}: download a file
  • DELETE /storage/delete/{file_key}: delete a file
  • DELETE /storage/delete-folder/{folder_prefix}: delete every file under a prefix
  • POST /storage/credentials: get temporary S3 credentials for direct access
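For single-file transfers, the upload and download endpoints can be driven from Python with requests. A minimal sketch, with two assumptions not stated above: the multipart field is named "file", and the base URL matches the one used in the credentials example later in this page.

```python
import requests

# Base URL assumed from the /storage/credentials example below.
API = "https://api.lyceum.technology/api/v2/external"


def object_url(endpoint: str, file_key: str) -> str:
    # The {file_key} path parameter may itself contain slashes ("data/x.csv").
    return f"{API}/storage/{endpoint}/{file_key}"


def upload(api_key: str, local_path: str, key: str) -> None:
    """Upload one file. The multipart field name "file" and the `key`
    query parameter are assumptions based on the endpoint summary."""
    with open(local_path, "rb") as f:
        r = requests.post(
            f"{API}/storage/upload",
            headers={"Authorization": f"Bearer {api_key}"},
            params={"key": key},
            files={"file": f},
        )
    r.raise_for_status()


def download(api_key: str, key: str, local_path: str) -> None:
    """Fetch one object and write it to a local file."""
    r = requests.get(
        object_url("download", key),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    r.raise_for_status()
    with open(local_path, "wb") as f:
        f.write(r.content)
```

For many files or large objects, prefer the direct S3 path described below over looping these endpoints.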

Direct S3 access

POST /storage/credentials returns a StorageCredentials object with these fields:
  • access_key: access key ID
  • secret_key: secret access key
  • session_token: STS session token
  • endpoint: S3-compatible endpoint URL
  • bucket_name: your bucket name
  • region: bucket region
  • expires_at: credential expiry timestamp
import boto3, requests

api_key = "lk_..."

# Exchange your Lyceum API key for temporary S3 credentials.
resp = requests.post(
    "https://api.lyceum.technology/api/v2/external/storage/credentials",
    headers={"Authorization": f"Bearer {api_key}"},
)
resp.raise_for_status()
creds = resp.json()

# Point a standard S3 client at the Lyceum endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url=creds["endpoint"],
    aws_access_key_id=creds["access_key"],
    aws_secret_access_key=creds["secret_key"],
    aws_session_token=creds["session_token"],
    region_name=creds["region"],
)

s3.upload_file("local.csv", creds["bucket_name"], "data/local.csv")
The same client works for download_file, list_objects_v2, multipart uploads, presigned URLs, and any other S3 operation.
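Because the credentials expire, long-running jobs need to decide when to request fresh ones. A minimal sketch, assuming expires_at is an ISO 8601 timestamp (the exact format is not specified here):

```python
from datetime import datetime, timedelta, timezone


def needs_refresh(expires_at: str, margin_s: int = 60) -> bool:
    """True when the temporary credentials are within `margin_s` seconds
    of expiry (or already expired). Assumes `expires_at` is ISO 8601."""
    expiry = datetime.fromisoformat(expires_at)
    if expiry.tzinfo is None:  # treat naive timestamps as UTC
        expiry = expiry.replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) >= expiry - timedelta(seconds=margin_s)
```

A wrapper that re-runs the POST /storage/credentials call and rebuilds the boto3 client whenever this returns True keeps a long job from failing mid-transfer.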

Mounting storage inside runs

How your bucket is exposed depends on the execution type:
  • Docker (lyceum docker run): the bucket is mounted at /mnt/s3 by default. Disable with --no-s3; change the path with --s3-mount-path /your/path.
  • Docker Compose (lyceum compose run): the mount is off by default. Enable it by setting enable_s3_mount: true in the API request.
  • Python (lyceum python run): the bucket is not mounted as a filesystem. Use the credentials endpoint above and any S3 client to read and write files.
  • VMs: the bucket is not auto-mounted. You can mount it yourself with s3fs, mc, rclone, or any S3 client using the credentials endpoint.
For Docker runs, files in /mnt/s3 map directly to objects in your bucket — reading a file fetches the object, writing creates or replaces it.