About
Felix is an Infrastructure Engineer at Lyceum, bringing extensive experience in distributed systems, DevOps, and cloud infrastructure.
Before joining Lyceum, Felix was an Infrastructure Engineer at Optiver, where he provisioned and operated bare-metal Linux servers across global data centres for nanosecond-latency trading systems. Prior to that, he spent over two years at Spotify as a Software Engineer, building CI/CD platforms that handled 60,000+ daily builds and operating centralised VCS infrastructure, including GitHub Enterprise.
Felix holds an MSc in Software Engineering of Distributed Systems from KTH Stockholm and a BSc in Industrial Engineering and Management from KIT Karlsruhe. His international experience spans Germany, Sweden, the Netherlands, Singapore, and Japan, including roles at Bosch and research projects at CERN.
Published Articles
GPU Cost Optimization
- Strategies to Reduce GPU Cloud Costs for ML Training
- A100 vs H100 for LLM Inference: The Engineer’s Guide to Efficiency
- The Cost Per Training Run Calculator: A Guide for ML Engineers
- Stopping the Bleed: The $15B Crisis of GPU Overprovisioning
- GPU ROI: Beyond the Hourly Rate in ML Infrastructure
- GPU Selection Guide for ML Training: 2026 Performance Benchmarks
- H100 vs A100 Cost Efficiency: A Technical Deep Dive
- How Many GPUs for Model Training? A Practical Scaling Guide
- Optimize Slurm GPU Allocation for High Performance AI Workloads
- How to Right Size GPU Instances for ML Workloads
- Navigating the AWS GPU Price Increase in 2026
- Best GPU for Llama 3 Fine-Tuning: A Technical Engineering Guide
- Colocation vs Cloud GPU for ML: An Engineering Guide
- CoreWeave vs Lambda GPU Cloud: The ML Engineer’s Guide to GPU Clusters
- Dedicated GPU vs Cloud Instance: The Engineer's Guide to AI Infrastructure
- Solving the 40 Percent GPU Cluster Utilization Problem
- GPU for 7B vs 70B Model: A Technical Infrastructure Guide
- GPU Memory Requirements for Transformer Models: A Technical Guide
- H100 80GB vs A100 80GB: Fine-Tuning Performance and TCO Analysis
- Lambda Labs vs RunPod vs Vast.ai: Choosing Your GPU Cloud
- Which GPU for Fine-Tuning 70B Models? A Technical Guide
Other
- Hardware Recommendations for LLM Fine-Tuning: The 2026 Guide
- AWS P5 H100 Pricing Per Hour 2026: A Technical Cost Analysis
- Egress Fees GPU Cloud Comparison: The Hidden Cost of AI
- The Engineer's Guide to GPU Clouds with No Egress Fees
- Nvidia H100 Availability Europe: A Guide for AI Engineering Teams
- Spot Instance GPU ML Training: A Technical Guide for AI Teams
Want to join the team?
We're always looking for exceptional engineers and builders who care about infrastructure.