Orchestration

One platform for all your GPUs.

Connect on-premise clusters, cloud accounts, and Lyceum Cloud. Pythia's AI scheduler eliminates out-of-memory (OOM) errors and maximizes utilization.

Pythia Analysis
Complete
H100 80GB
2h 14m · $7.38 · 68% VRAM
Best fit
A100 80GB
3h 41m · $7.32 · 72% VRAM
Compatible
L40S 48GB
Insufficient VRAM
OOM risk
Recommended: H100 (best cost/perf)
// Pythia

What we predict.

80GB

Memory usage

Predict VRAM requirements before execution. No more OOM surprises.

H100
2.1h
A100
3.4h

Runtime

Estimated job duration per GPU type based on workload analysis.

H100

GPU selection

Automatically match jobs to optimal hardware. Balance cost and speed.

On-prem
Cloud

Cloud burst

Overflow to cloud when on-prem is full. Automatic, cost-aware.
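The GPU-selection trade-off above can be sketched in a few lines. This is a hypothetical illustration, not Pythia's actual algorithm: the rates, runtimes, and the 5% price tolerance are made-up numbers chosen to mirror the analysis card. The idea: rule out GPUs that would OOM, then among options priced within a small margin of the cheapest, prefer the fastest.

```python
# Hypothetical cost/speed trade-off behind GPU selection.
# Rates, runtimes, and the 5% price tolerance are illustrative,
# not Lyceum's actual pricing or Pythia's real scheduler.

def pick_gpu(candidates, required_vram_gb, price_tolerance=1.05):
    # Rule out OOM risks up front.
    feasible = [g for g in candidates if g["vram_gb"] >= required_vram_gb]
    if not feasible:
        raise RuntimeError("no GPU has enough VRAM; job would OOM")
    cost = lambda g: g["rate_per_h"] * g["runtime_h"]  # total job cost
    cheapest = min(feasible, key=cost)
    # Among near-cheapest options, prefer the fastest (best cost/perf).
    near = [g for g in feasible if cost(g) <= price_tolerance * cost(cheapest)]
    return min(near, key=lambda g: g["runtime_h"])

candidates = [
    {"name": "H100 80GB", "vram_gb": 80, "rate_per_h": 3.30, "runtime_h": 2.23},
    {"name": "A100 80GB", "vram_gb": 80, "rate_per_h": 1.99, "runtime_h": 3.68},
    {"name": "L40S 48GB", "vram_gb": 48, "rate_per_h": 1.10, "runtime_h": 5.50},
]
best = pick_gpu(candidates, required_vram_gb=54)
print(best["name"])  # → H100 80GB: near-cheapest overall, much faster
```

With these numbers the A100 is nominally cheapest ($7.32 vs $7.36 total), but it sits within the tolerance band, so the much faster H100 wins — which is how "best cost/perf" can differ from "lowest price".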

Unified Control

All your GPUs. One interface.

Connect your on-premise clusters, AWS/GCP/Azure accounts, and Lyceum Cloud. Submit jobs once, let Pythia find the best place to run them.

  • Single dashboard for all resources
  • Unified job queue across clusters
  • Automatic failover between providers
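The failover bullet boils down to priority-ordered dispatch. A minimal sketch, assuming on-prem is tried first and cloud absorbs overflow — the `Provider` class and names here are made up for illustration, not Lyceum's SDK:

```python
from dataclasses import dataclass

# Hypothetical illustration of priority-ordered dispatch with cloud
# overflow; the Provider class is invented, not Lyceum's actual SDK.

@dataclass
class Provider:
    name: str
    free_gpus: int

    def submit(self, job: str) -> str:
        self.free_gpus -= 1
        return f"{job} -> {self.name}"

def dispatch(job: str, providers: list) -> str:
    """Try providers in priority order: on-prem first, cloud as overflow."""
    for p in providers:
        if p.free_gpus > 0:
            return p.submit(job)
    raise RuntimeError("all providers at capacity; job stays queued")

providers = [
    Provider("on-prem DC1", free_gpus=0),   # full: triggers cloud burst
    Provider("AWS eu-west-1", free_gpus=4),
]
print(dispatch("train.py", providers))  # → train.py -> AWS eu-west-1
```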
Connected Resources · All healthy
On-premise DC1
32 H100s
AWS eu-west-1
On-demand A100s
Lyceum Cloud
Burst capacity
pythia-analysis
# Submit a training job
$ lyceum gpu-selection run train.py
[Pythia] Analyzing workload...
├─ Detected: PyTorch DDP training
├─ Model: 7B parameters
├─ Estimated VRAM: 54GB peak
└─ Runtime estimate: 2h 14m
[Pythia] Selecting optimal GPU...
├─ H100 80GB: $7.38 (best)
├─ A100 80GB: $7.32
└─ L40S 48GB: ❌ OOM risk
✓ Dispatched to on-prem H100 cluster
AI Scheduling

Pythia picks the perfect GPU

Pythia analyzes your code to predict memory usage, runtime, and cost. It automatically selects the optimal GPU and location, eliminating OOMs and wasted spend.

  • VRAM prediction prevents OOM errors
  • Cost-optimized placement decisions
  • Learns from your workload history
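Why prediction beats a rule of thumb: a naive back-of-envelope estimate for unsharded mixed-precision Adam training (a sketch under stated assumptions, not Pythia's model) puts a 7B model far above any single card, while real jobs using sharding, LoRA, or 8-bit optimizers can peak much lower — so only workload-level analysis gets the number right.

```python
GB = 1024 ** 3

def adam_state_vram_gb(params: float) -> float:
    """Lower bound for unsharded mixed-precision Adam training:
    2 B (fp16 weights) + 2 B (fp16 grads)
    + 4 B + 4 B + 4 B (fp32 master weights, momentum, variance)
    = 16 bytes per parameter. Activations come on top of this."""
    return params * 16 / GB

# The naive formula says a 7B model needs >100 GB for states alone,
# yet actual workloads (sharding, LoRA, 8-bit optimizers) can peak far
# lower — which is why Pythia profiles the job instead of using a rule.
print(round(adam_state_vram_gb(7e9)))  # → 104
```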
// How it works

Connect, submit, run.

1
Connect your clusters
Link your on-prem infrastructure and cloud accounts, or use Lyceum Cloud. All three work together seamlessly.
On-premise AWS GCP Azure
2
Submit your jobs
Push jobs via CLI, SDK, or dashboard. Pythia analyzes your workload and selects the optimal GPU automatically.
lyceum gpu-selection run train.py
3
Monitor everything
Track utilization, costs, and queues in real time. Auto-burst to cloud when on-prem capacity is full.
Real-time metrics
0
OOM errors
30%
cost savings
95%
utilization
<30m
setup time

Stop managing infrastructure.

Connect your first cluster in under 30 minutes.