Last week’s Red Hat Summit 2025 in Boston had a great deal to say about the crucial role that open source will play in the future of enterprise Artificial Intelligence (AI).
Red Hat introduced new capabilities to help make AI a reality for the enterprise, announced major AI collaborations with Boston University and Expero Inc., and introduced a new AI and machine learning platform on Red Hat OpenShift, the world’s leading enterprise Kubernetes platform.
One of the big announcements was the release of the Red Hat AI Inference Server, based on the open-source vLLM project. This is intended to give businesses a supported, production-ready inference server that can deploy and scale generative AI models to any Linux platform or Kubernetes distribution, including Red Hat OpenShift.
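Since the Red Hat AI Inference Server builds on vLLM, client code can talk to it through vLLM's OpenAI-compatible HTTP API. As a minimal sketch, the snippet below assembles a chat-completion payload for such a server; the URL and model name are illustrative assumptions, not values from the announcement.

```python
import json

# Assumed values for illustration only: a vLLM server started with
# `vllm serve <model>` exposes an OpenAI-compatible API (default port 8000).
VLLM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # hypothetical model choice

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Assemble an OpenAI-style chat-completion payload for a vLLM server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,
    }

payload = build_chat_request("Summarize our deployment checklist.")
print(json.dumps(payload, indent=2))
```

One could then POST this payload with any HTTP client (e.g. `requests.post(VLLM_URL, json=payload)`); because the API is standard HTTP, the same client code works whether the server runs on bare RHEL or inside an OpenShift pod.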
The move reflects Red Hat’s efforts to provide flexibility and reduce costs in AI deployments as companies work to adopt AI on top of their existing infrastructure.
Reinforcing this vision, Red Hat announced the creation of llm-d, a new open source community project that aims to accelerate distributed generative AI inference. In collaboration with industry giants such as Google Cloud, IBM Research, and NVIDIA, llm-d aims to develop novel inference techniques that make large language model (LLM) deployments more scalable and efficient. The effort speaks to the growing recognition that effective inference is key to extracting business value from AI.
Red Hat unveiled broader integrations as well, including one with NVIDIA that pairs OpenShift AI with the NVIDIA Enterprise AI Factory for deploying agentic AI systems in hybrid cloud environments.
Red Hat also announced a collaboration with Meta to develop open-source generative AI for business use. The partnership involves upstreaming support for vLLM on Google Cloud and integrating Meta’s Llama Stack into the Red Hat AI platform.
The company also announced Red Hat OpenShift Lightspeed, a generative AI assistant built into Red Hat OpenShift.
It aims to simplify complex IT processes by offering knowledge tailored to the organizational context, delivered at the time and place where it is most useful: right in the OpenShift web console. The goal is to increase operational efficiency.
On a related note, RHEL 10 will adopt RHEL Lightspeed, bringing AI-driven assistance to the system administrator’s command-line experience.
These releases at Red Hat Summit 2025 point to an open, agile, and collaborative future for enterprise AI. By leading open-source innovation and forging key industry partnerships, Red Hat is working to help organizations extract business value from AI and machine learning by delivering and scaling intelligent applications across hybrid and multicloud environments.