Published on Dec 16, 2025

ML Research Intern – Runtime Prediction

Intern

Zurich / Berlin

About Lyceum

Lyceum is building a user-centric GPU cloud from the ground up. Our mission is to make high-performance computing seamless, accessible, and tailored to the needs of modern AI and ML workloads. We're not just deploying infrastructure; we're designing and building our own large-scale GPU clusters from scratch. If you've ever wanted to help shape a cloud platform from day one, this is your moment.

The Role:
You’ll join our R&D team as a junior researcher working on runtime prediction, hardware selection, and workload efficiency.
You will support the design of experiments, help build models that predict resource requirements, and contribute to deploying them on our infrastructure to automate scheduling and cost prediction for customers. This role is ideal for a Master’s student, e.g. in the context of a thesis or research internship.
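
To give a concrete, purely illustrative sense of the runtime-prediction side of the work: the problem can be framed as supervised regression from profiled workload features to measured runtimes. The sketch below is a minimal baseline on synthetic data; the feature names (parameter count, batch size, sequence length, GPU count, peak VRAM) and the model choice are assumptions for illustration, not Lyceum's actual pipeline.

    # Minimal illustrative baseline: regress job runtime on simple workload features.
    # Feature names, synthetic data, and model choice are assumptions for illustration
    # only -- not Lyceum's actual pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical profiling features per job:
    # [params (B), batch size, sequence length, GPUs requested, peak VRAM (GB)]
    X = rng.uniform(low=[0.1, 1, 128, 1, 4], high=[70, 256, 8192, 8, 80], size=(2000, 5))

    # Synthetic stand-in for measured runtimes (seconds) from past jobs.
    y = 30 + 2.5e-3 * X[:, 0] * X[:, 2] / X[:, 3] + 0.8 * X[:, 1] + rng.normal(0, 5, 2000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.1f} s")

A real system would train on profiled runs from the cluster and feed its predictions into scheduling and cost estimation, which is where the topics below come in.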

What We're Working On

  • Runtime prediction models & scheduling heuristics

  • Benchmarking across LLMs, vision & multimodal models

  • Throughput, latency & stability optimisation at scale

  • Workload profiling (VRAM/compute/memory)

  • Reference pipelines, reproducible evaluation suites

  • Practical docs, baselines, and performance guidance

What We’re Looking For

  • Currently enrolled in a Master’s in CS/AI/ML (ideally working on or preparing a thesis in applied ML)

  • Solid fundamentals in model training & evaluation

  • Initial experience from research projects, a lab, or industry internships (Research Engineer/Assistant/Scientist)

  • Interest in model efficiency or GPU performance (quantization, pruning, large-scale training, profiling)

  • Ownership mindset and rigor in experimentation, even at a junior level

  • Clear writing; reproducible results

  • Based in Switzerland or open to relocating there for the internship

  • Tech stack: Python, PyTorch/JAX (and/or TensorFlow). CUDA/GPU literacy is a plus.

Bonus Points

  • Experience with large-scale or distributed training (e.g. in a university or lab setting)

  • Exposure to dataset curation, evaluation design, or reproducibility practices

  • Publications, thesis work, or high-quality open-source contributions

Why Join Us
Build from zero: This is a rare opportunity to join a startup at the earliest stages and help shape not just the product, but the foundation of the company. You’ll have real ownership over your projects and the chance to grow into a full-time role.
Hard, meaningful problems: We're tackling some of the most interesting challenges in cloud infrastructure, scheduling, and performance optimization, at the intersection of hardware and AI.
World-class hardware: You'll be working directly with cutting-edge GPU hardware and helping build some of the most performant compute platforms in Europe.

Everything else: Compensation, mentorship, team events, etc. – it's our job to make sure you have everything you need to learn fast and do your best work!

Lyceum is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

