Docker Execution
Any container.
Any GPU.
Full environment control. Per-second billing. Your containers, running on the exact hardware you need.
```shell
$ lyceum docker run myregistry/train:v2 \
    --machine gpu.h100.8x \
    --volume ./data:/data
▸ Pulling myregistry/train:v2...
▸ Image size: 4.2 GB
▸ Provisioning 8x H100 cluster...
▸ Mounting volumes...
Container running on 8x H100
[rank 0] Initializing distributed training...
[rank 0] Model loaded: 70B parameters
[rank 0] Starting epoch 1/100...
```
Full control. Here's why.
Any Docker image
If it runs in Docker, it runs on Lyceum. No modifications needed.
Per-second billing
Pay only for what you use. A 4-minute job costs 4 minutes.
Instant availability
GPUs provision in seconds. No waiting in queues.
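As a rough sketch of what per-second billing means in practice, here is the arithmetic for a 4-minute job on an 8x H100 machine, using the $3.29/hr H100 rate from the pricing table below. Billing each of the 8 GPUs at that per-GPU hourly rate is an illustrative assumption, not documented pricing.

```python
# Worked cost example for per-second billing.
# Assumption: each GPU in the machine is billed at the per-GPU hourly rate.
HOURLY_RATE_PER_GPU = 3.29  # USD/hr, NVIDIA H100 (from the table below)
NUM_GPUS = 8
JOB_SECONDS = 4 * 60        # a 4-minute job

cost = HOURLY_RATE_PER_GPU * NUM_GPUS * JOB_SECONDS / 3600
print(f"${cost:.2f}")  # → $1.75
```

Four minutes costs about $1.75 instead of the $26.32 an hourly minimum would charge.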
Available GPUs
T4 to B200. Multi-GPU and multi-node available.
| GPU | VRAM | Price/hr |
|---|---|---|
| NVIDIA B200 | 192 GB | $5.89 |
| NVIDIA H200 | 141 GB | $3.69 |
| NVIDIA H100 | 80 GB | $3.29 |
| NVIDIA A100 | 80 GB | $1.99 |
| NVIDIA L40S | 48 GB | $1.49 |
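One way to put the table to work: pick the cheapest listed GPU that fits a model's VRAM needs. The names, VRAM sizes, and prices below are copied from the table above; the helper function itself is a hypothetical illustration, not part of the Lyceum CLI.

```python
# Hypothetical helper: choose the cheapest listed GPU with enough VRAM.
# Data copied from the pricing table above.
GPUS = [
    ("NVIDIA B200", 192, 5.89),
    ("NVIDIA H200", 141, 3.69),
    ("NVIDIA H100", 80, 3.29),
    ("NVIDIA A100", 80, 1.99),
    ("NVIDIA L40S", 48, 1.49),
]

def cheapest_gpu(min_vram_gb):
    """Return (name, price/hr) of the cheapest GPU with >= min_vram_gb VRAM."""
    candidates = [(price, name) for name, vram, price in GPUS if vram >= min_vram_gb]
    if not candidates:
        return None
    price, name = min(candidates)
    return name, price

print(cheapest_gpu(70))   # → ('NVIDIA A100', 1.99)
print(cheapest_gpu(150))  # → ('NVIDIA B200', 5.89)
```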
Want something simpler?
For rapid prototyping with just Python scripts, try Python execution. Zero config, automatic dependency detection, instant iteration.
Explore Python execution

Run your first GPU job.
No credit card required. Request access to get started.