Multiverse Computing pulls in $215M to compress AI models with quantum-inspired tech

© Vithun Khamsong / Getty Images

Spanish startup Multiverse Computing has raised a €189 million (~$215 million) Series B round to scale its quantum-inspired AI compression tool, CompactifAI.

The tech can reportedly shrink large language models (LLMs) by up to 95% without degrading performance, enabling faster, cheaper inference.

Leaner models for broader deployment

Multiverse Computing’s “slim” models are compressed versions of open-source LLMs like Llama 3 and 4, Mistral, and soon DeepSeek R1. These models run 4–12x faster and cost 50–80% less to operate. Some are compact enough to run on edge devices like PCs, cars, and even Raspberry Pi boards.


Quantum physics roots meet deep learning

Co-founded by CTO Román Orús, a physicist at the Donostia International Physics Center, the company uses tensor networks—a classical computing technique inspired by quantum mechanics—to compress models. CEO Enrique Lizaso Olmos brings a background in mathematics and banking to the leadership team.
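The idea behind such methods can be illustrated with the simplest relative of tensor-network compression: a low-rank factorization of a weight matrix. The sketch below is purely illustrative and is not Multiverse's actual CompactifAI algorithm, which has not been published; it only shows how replacing one large matrix with two thin factors shrinks parameter count and inference cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one dense weight matrix of an LLM layer (hypothetical sizes).
W = rng.standard_normal((1024, 1024))

# Truncated SVD: keep only the top-r singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 64
A = U[:, :r] * s[:r]   # shape (1024, r)
B = Vt[:r, :]          # shape (r, 1024)

original_params = W.size              # 1024 * 1024 = 1,048,576
compressed_params = A.size + B.size   # 2 * 1024 * 64 = 131,072
print(f"parameter reduction: {1 - compressed_params / original_params:.1%}")

# Inference then computes x @ A @ B instead of x @ W: two small matmuls
# replace one large one, cutting both memory footprint and FLOPs.
```

Real tensor-network approaches generalize this by decomposing weights into chains or grids of small tensors rather than a single two-factor split, which is what allows the reported compression ratios while preserving model behavior.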

Heavyweight investors and global traction

The round was led by Bullhound Capital with participation from HP Tech Ventures, Toshiba, Santander Climate VC, and others. Multiverse Computing claims 100 global customers and 160 patents. With this round, its total funding reaches $250 million.

