Documentation Index
Fetch the complete documentation index at https://docs.lyceum.technology/llms.txt and use it to discover all available pages before exploring further.
1. Create an account
Sign up at the Lyceum Cloud Dashboard. New accounts receive starter credits you can use immediately.
2. Choose an interface
CLI
VS Code Extension
REST API
Install the CLI
The CLI is published as the lyceum binary and is installed with pip.
Log in
You'll be prompted for your email and password; the command exchanges them for a JWT stored locally. For long-running scripts and CI, generate an API key instead (see API Keys).
Run your first job
Run a one-liner on the platform:
lyceum python run "print('Hello from Lyceum')"
Run a Python file with a specific machine type:
lyceum python run script.py --machine cpu
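As an illustration, script.py could be a minimal script; the file name comes from the command above, but its contents here are hypothetical:

```python
# script.py: a minimal job to submit with `lyceum python run script.py --machine cpu`
import platform

# Build the message first so it can be reused or logged elsewhere.
message = f"Hello from Lyceum, running Python {platform.python_version()}"
print(message)
```

Anything the script prints is streamed back to your terminal by the CLI.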
Install the extension
Open VS Code
Open the Extensions view (Cmd/Ctrl+Shift+X)
Search for Lyceum Cloud
Install the official extension published by lyceumtechnology
Authenticate
Open any .py or .ipynb file
Click the cloud icon in the editor toolbar
The extension opens the dashboard for sign-in and redirects back to VS Code
Run code
With a Python file open, click the cloud icon (or run the Lyceum Cloud: Execute on Cloud command). The current file is submitted as a run and its output is streamed back into VS Code.
Get an API key
Open the Lyceum Cloud Dashboard
Go to API Keys
Click New API Key, give it a name, and optionally set an expiration
Copy the key (it starts with lk_); it is only shown once
API keys grant full access to your account. Store them in a secret manager and never commit them to source control.
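A common pattern for keeping keys out of source control (an assumption here, not a documented Lyceum requirement) is to read the key from an environment variable:

```python
import os

def load_api_key(var: str = "LYCEUM_API_KEY") -> str:
    """Read the Lyceum API key from the environment instead of hard-coding it."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running this script")
    return key
```

The variable name LYCEUM_API_KEY is illustrative; any name works as long as the key itself never lands in your repository.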
Make a request
The API base is https://api.lyceum.technology/api/v2/external, and authentication uses the Authorization: Bearer <key> header:

curl https://api.lyceum.technology/api/v2/external/billing/credits \
  -H "Authorization: Bearer lk_your_api_key"
See the API Reference for the full endpoint list.
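The same credits request can be built from Python with only the standard library. The base URL, endpoint path, and Bearer header come from the section above; the helper function name and the response handling are a sketch, not part of the documented API:

```python
import os
import urllib.request

API_BASE = "https://api.lyceum.technology/api/v2/external"

def lyceum_request(path: str, api_key: str) -> urllib.request.Request:
    # Every external API call carries the key as a Bearer token.
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = lyceum_request("/billing/credits", os.environ.get("LYCEUM_API_KEY", "lk_example"))
# Send it with urllib.request.urlopen(req) once a valid key is set.
```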
3. Where to next
Launch a run: submit Python or Docker workloads to GPU machines.
Launch an instance: provision a dedicated GPU VM with SSH access.
Deploy a model: stand up a Hugging Face model behind an OpenAI-compatible endpoint.
Upload files: use your per-user S3 bucket for inputs and outputs.