Multiverse Computing, a leader in quantum-inspired artificial intelligence, today announced a $215 million Series B funding round. The investment will fuel the broad rollout of CompactifAI, the company's LLM compression technology, which aims to transform the economics and availability of large language models.
The round, which comprises $180 million in equity and $35 million in grants and GPU resources, brings the company's total raised to about $250 million. It was led by Bullhound Capital, with participation from HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, and Toshiba.
The investment reflects strong confidence in Multiverse Computing's approach to the high computational and energy costs of large language models.
Large language models, transformative as they are, demand enormous compute and energy, driving up datacenter costs and putting them out of reach for many organizations. Conventional compression approaches, meanwhile, substantially degrade model performance.
Multiverse Computing's CompactifAI, by contrast, uses quantum-inspired tensor networks to achieve compression levels previously unseen. The method can shrink LLM sizes by as much as 95 percent with only a 2 to 3 percent drop in accuracy — a dramatic difference from the 20 to 30 percent accuracy loss typical of other approaches.
Concretely, this translates into significant advantages: 4 to 12 times faster inference, a 50 to 80 percent reduction in energy costs (84 percent more energy-efficient than state-of-the-art methods), 50 percent faster training, and a 25 percent increase in inference speed. That means AI models can be deployed more efficiently and commercially, including on edge devices like smartphones, laptops, cars, and drones.
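To give a flavor of how factorized compression trades parameters for a small accuracy loss, here is a minimal sketch using a truncated SVD — the simplest tensor-network-style decomposition — on a toy weight matrix. This is an illustration of the general principle only; CompactifAI's actual tensor-network method is proprietary, and the matrix, rank, and sizes below are made up for the example.

```python
import numpy as np

# Illustrative sketch: replace one dense "layer" W with two smaller
# factors A and B such that W ~= A @ B, the core idea behind
# tensor-network / low-rank compression.

rng = np.random.default_rng(0)

# Toy weight matrix with approximate rank-8 structure plus noise,
# standing in for one layer of a large model.
true_rank = 8
W = rng.normal(size=(512, true_rank)) @ rng.normal(size=(true_rank, 512))
W += 0.01 * rng.normal(size=W.shape)

# Truncated SVD: keep only the top-k singular directions.
k = 8
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]   # 512 x k factor
B = Vt[:k, :]          # k x 512 factor

# Storage drops from 512*512 parameters to 2*512*k.
compression = 1 - (A.size + B.size) / W.size

# Reconstruction error stays small because W is nearly rank-8.
rel_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)

print(f"compression: {compression:.1%}, relative error: {rel_error:.3%}")
```

With these toy numbers the factorization removes over 95 percent of the parameters while the reconstruction error stays well under 1 percent — mirroring the shape, if not the engineering, of the tradeoff described above.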
Founded in 2019, Multiverse Computing started out as a provider of quantum computing services across business sectors. Its 2023 pivot to LLM compression, applying its quantum-inspired techniques to model optimization, has proven well timed. The firm currently offers compressed versions of popular open-source models such as Llama 4 Scout, Llama 3.3 70B, Llama 3.1 8B, and Mistral Small 3.1, with more models, including DeepSeek R1, in the pipeline.
With more than 100 global clients, including some of the largest names in healthcare, finance, and government, Multiverse Computing is well positioned to lead this shift.
The new capital will also fund hiring at scale and the purchase of additional GPU resources, accelerating the development and deployment of more compressed LLMs — opening AI to more people and driving a new era of efficient, sustainable AI.