| Authentication | /auth/login, /auth/refresh, /auth/api-keys/... | API Keys |
| MFA | TOTP enrolment, verification, backup codes | Settings |
| Streaming Execution | Submit Python runs, fetch status, abort | Launch a Run |
| Docker Execution | Submit Docker image runs | Launch a Run |
| Docker Compose Execution | Submit Compose stacks | Launch a Run |
| GPU Selection Execution | Fan out across GPU types | Runs |
| Workload Management | List, abort, stop runs | Runs |
| Execution Management | Get/delete a run, fetch timing | Runs |
| Observability - Logs | Loki-backed log queries | Logs |
| Observability - GPU Metrics | DCGM and system metrics per execution | GPU & System Metrics |
| Machine Types | Hardware catalogue and pricing | Launch a Run |
| User Quotas | Hardware profiles your account can use | Settings |
| Storage Files / Storage Credentials | Per-user S3 bucket | Storage |
| Environment Variables | Secrets injected into runs | Secrets |
| Dedicated Deployment External | Create, get, list, stop dedicated deployments | Dedicated Inference |
| Streaming Inference | SSE streaming for inference results | Streaming Inference |
| Batch API | OpenAI-compatible files and batches | — |
| Billing | Credits, history, invoices, vouchers | Billing |
| VMs | Provision and manage GPU virtual machines | Your VMs |
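Most of the endpoints above follow the usual pattern of logging in once and then attaching a bearer token to subsequent calls. The sketch below illustrates that flow under stated assumptions: only `/auth/login` and `/auth/refresh` appear in the table, so the base URL, the request/response field names, and the `/runs` path used here are all hypothetical placeholders, not the platform's confirmed API shapes. The requests are built but deliberately not sent.

```python
import json
import urllib.request

# Hypothetical base URL -- substitute your actual API host.
BASE = "https://api.example.com"

def build_login_request(email: str, password: str) -> urllib.request.Request:
    """Build (but do not send) a POST to the /auth/login endpoint.

    The JSON field names ("email", "password") are assumptions; check the
    Authentication reference for the real request schema.
    """
    body = json.dumps({"email": email, "password": password}).encode()
    return urllib.request.Request(
        BASE + "/auth/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def build_authed_request(path: str, token: str) -> urllib.request.Request:
    """Attach a bearer token (e.g. one returned by /auth/login) to a later call."""
    return urllib.request.Request(
        BASE + path,
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

# "/runs" is illustrative only; see the Workload Management endpoints.
login = build_login_request("user@example.com", "s3cret")
run_list = build_authed_request("/runs", "abc123")
```

Long-lived automation would typically swap the password login for an API key (see the API Keys section) and call `/auth/refresh` before the access token expires.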