About Lyceum

Making GPUs work for everyone.

We're building the infrastructure layer that makes GPU compute as easy to use as any other cloud service — starting in Europe.

Our Mission

Accessing GPU compute is still unnecessarily hard. Provisioning takes too long, pricing is opaque, and most teams either overpay for hardware they don't fully use or waste days managing infrastructure instead of building products.

Lyceum exists to fix that. We're building a single platform where any team — from a two-person AI startup to a large enterprise research lab — can access the exact compute they need in seconds, pay only for what they use, and never think about infrastructure again.

We believe GPU infrastructure should be as simple as: submit your code, get your results.

What We Do

Lyceum is a GPU cloud platform based in Berlin. We provide the full stack of compute infrastructure for AI workloads:

Inference

Serverless APIs and dedicated endpoints for deploying models into production. Pay per token or reserve dedicated capacity.

Training & Execution

Run Python scripts or Docker containers on GPUs without managing servers. Per-second billing, automatic hardware selection, no idle costs.
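To make the per-second billing model concrete, here is a minimal sketch of how such a charge works out in practice. The function name and the €2.00/hour GPU rate are hypothetical, chosen only for illustration — they are not Lyceum's actual pricing or API.

```python
def job_cost(runtime_seconds: int, rate_per_hour: float) -> float:
    """Per-second billing: convert an hourly rate to a per-second rate
    and charge only for the seconds the job actually ran — no idle costs."""
    rate_per_second = rate_per_hour / 3600
    return round(runtime_seconds * rate_per_second, 4)

# A 90-second job at a hypothetical 2.00/hour GPU rate:
print(job_cost(90, 2.00))    # 90 * (2.00 / 3600) = 0.05
# A full hour costs exactly the hourly rate:
print(job_cost(3600, 2.00))  # 2.0
```

The point of the model is the second line of the docstring: a job that finishes in 90 seconds is billed for 90 seconds, not rounded up to an hour.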

Infrastructure

On-demand GPU virtual machines with full root access, and large-scale clusters with InfiniBand for distributed training — from 1 to 8,000 GPUs.

Intelligent Orchestration

Our AI analyses workloads at the compiler level to predict runtime, memory requirements, and optimal GPU configuration before your job starts. No more overprovisioning, no more OOM errors.
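The core idea — picking hardware from a prediction instead of a guess — can be sketched in a few lines. This is a toy illustration only: Lyceum's actual profiler derives the memory prediction automatically at the compiler level, and the GPU names and prices below are hypothetical placeholders, not a real catalogue.

```python
# Hypothetical GPU catalogue: (name, memory in GB, illustrative price/hour).
GPUS = [
    ("small-16gb", 16, 0.60),
    ("mid-40gb", 40, 1.80),
    ("large-80gb", 80, 3.50),
]

def pick_gpu(predicted_memory_gb: float) -> str:
    """Choose the cheapest GPU whose memory fits the predicted requirement.
    Filtering out too-small GPUs avoids OOM errors; taking the cheapest
    of the remainder avoids overprovisioning."""
    candidates = [g for g in GPUS if g[1] >= predicted_memory_gb]
    if not candidates:
        raise ValueError("workload exceeds the largest available GPU")
    return min(candidates, key=lambda g: g[2])[0]

print(pick_gpu(12))  # small-16gb — fits, so no need to pay for more
print(pick_gpu(30))  # mid-40gb — small-16gb would OOM
```

With an accurate memory prediction in hand before the job starts, the selection itself is straightforward; the hard part is the prediction.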

Built in Europe. Built for Everyone.

We're headquartered in Berlin with GPU infrastructure across European data centres. That means GDPR compliance by default, EU data sovereignty, and low-latency access for European teams.

But this isn't just about geography. Europe's AI ecosystem is growing fast, and it needs compute infrastructure built to its standards — on data privacy, on transparency, on pricing. We're building that.

[Map of Europe showing Lyceum locations — offices and data centres in Berlin, Zurich, and Paris]

The Team

Lyceum was founded by engineers and researchers who spent years working with GPU infrastructure and saw the same problems everywhere: teams overpaying for hardware, burning days on DevOps instead of research, and making GPU choices based on gut feeling rather than data.

We've worked across cloud infrastructure, compiler engineering, and machine learning — at places where we learned firsthand what it takes to run GPU compute at scale and what's broken about the current approach.

How We Work

Substance over hype.

The AI infrastructure space is full of noise. We focus on building things that actually work and solving real problems for real teams.

Transparent by default.

Per-second billing, upfront pricing, no hidden fees, no lock-in. If you can see exactly what you're paying for and why, we're doing our job.

Engineering depth.

Our core technology — workload profiling at the compiler level — isn't a marketing wrapper around commodity cloud. It's real research solving a real problem: how to match workloads to hardware automatically.

Small team, high standards.

We'd rather ship fewer things that work well than many things that kind of work.

Want to work with us?

We're always looking for exceptional engineers, researchers, and builders who care about infrastructure.