Compare GPU cloud pricing.
Add your workloads and see exactly how much you could save by switching to Lyceum.
Add your GPU workloads.
Select your current provider, GPU type, and runtime. Click any cell to edit values.
Lyceum GPU pricing
On-demand pricing with per-second billing. No hidden fees, no egress charges.
| GPU | VRAM | Best for | Price/hour |
|---|---|---|---|
| NVIDIA B200 | 192 GB | Large-scale training, frontier models | $4.29 |
| NVIDIA H200 | 141 GB | Training, large inference workloads | $3.19 |
| NVIDIA H100 | 80 GB | Training, fine-tuning, high-throughput inference | $2.49 |
| NVIDIA A100 | 80 GB | Fine-tuning, medium inference, general ML | $1.39 |
| NVIDIA L40S | 48 GB | Inference, image/video generation | $1.05 |
| NVIDIA T4 | 16 GB | Lightweight inference, development, testing | $0.39 |
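Per-second billing means a job's cost is simply its runtime in seconds times the hourly rate divided by 3,600. A minimal sketch of that calculation, using the on-demand rates from the table above (the `job_cost` helper and its signature are illustrative, not a Lyceum API):

```python
# Illustrative per-second billing math using the on-demand rates above.
HOURLY_RATES = {
    "B200": 4.29,
    "H200": 3.19,
    "H100": 2.49,
    "A100": 1.39,
    "L40S": 1.05,
    "T4": 0.39,
}

def job_cost(gpu: str, seconds: int, num_gpus: int = 1) -> float:
    """Cost in USD for `seconds` of runtime, billed per second."""
    per_second = HOURLY_RATES[gpu] / 3600
    return round(per_second * seconds * num_gpus, 2)

# A two-hour fine-tuning run on a single H100:
print(job_cost("H100", 2 * 3600))  # 4.98
```

Because billing is per second, a 10-minute job costs exactly one-sixth of an hour's rate; there is no rounding up to the next hour.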
Common questions.
How much does an H100 GPU cost per hour?
H100 pricing varies significantly by provider. On Lyceum, an H100 costs $2.49 per hour on-demand with per-second billing; use the calculator above to compare your current provider's rate for the same runtime.
Which cloud GPU provider is cheapest?
The cheapest provider depends on your GPU type, commitment length, and workload pattern. For most GPU types, Lyceum offers competitive on-demand pricing without requiring long-term commitments. Use the calculator above to compare exact costs for your specific requirements.
What's the difference between serverless and on-demand?
- **Serverless:** pay per second for actual compute time. No commitments; ideal for variable or bursty workloads.
- **On-demand:** reserve GPU instances by the hour. Best for longer training runs or consistent workloads.
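The difference matters most for bursty workloads. A rough sketch, assuming an illustrative $2.49/hour rate and a simplified hourly model where each short job occupies an instance billed in whole hours (real reservation patterns vary):

```python
import math

HOURLY_RATE = 2.49  # illustrative rate, e.g. an H100 from the table above

def serverless_cost(job_seconds: list[int]) -> float:
    """Per-second billing: pay only for the seconds actually used."""
    return round(sum(job_seconds) * HOURLY_RATE / 3600, 2)

def hourly_cost(job_seconds: list[int]) -> float:
    """Simplified hourly billing: each job rounded up to whole hours."""
    return round(sum(math.ceil(s / 3600) for s in job_seconds) * HOURLY_RATE, 2)

jobs = [300, 600, 900, 450]  # four short inference jobs, ~37 minutes total
print(serverless_cost(jobs))  # 1.56
print(hourly_cost(jobs))      # 9.96
```

Under these assumptions the same 37 minutes of compute costs several times more when each burst is rounded up to a full hour, which is why per-second billing suits variable workloads.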
Does this include egress and storage costs?
This calculator shows GPU compute costs only. Storage and egress fees vary by provider and can add 10-30% to total costs, especially on AWS and GCP. Lyceum includes egress in GPU pricing with no hidden fees.
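To see how that 10-30% overhead changes a bill, a one-line sketch (the `total_cost` helper is hypothetical, for illustration only):

```python
def total_cost(compute_usd: float, overhead_pct: float) -> float:
    """Compute spend plus a storage/egress overhead, given as a percentage."""
    return round(compute_usd * (1 + overhead_pct / 100), 2)

# $1,000 of monthly GPU compute at the 10% and 30% ends of the range above:
print(total_cost(1000, 10))  # 1100.0
print(total_cost(1000, 30))  # 1300.0
```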
Want to learn more?
Talk to our team about your GPU requirements and see how Lyceum can help.