Red Hat and Meta have announced a collaboration aimed at advancing generative AI for enterprise use. The partnership began with Red Hat's support for the Llama 4 model family on its AI platform and the vLLM inference server. The companies are now working to align the Llama Stack and vLLM community projects, with the goal of creating unified frameworks that simplify open generative AI workloads.
Mike Ferris, Red Hat’s senior vice president and chief strategy officer, emphasized the importance of this collaboration: “Our joint commitment to Llama Stack, vLLM and the new llm-d project will help realize a vision of faster, more consistent and more cost-effective gen AI applications running wherever needed across the hybrid cloud, regardless of accelerator or environment. This is the open future of AI, and one that Red Hat and Meta are ready to meet.”
According to Gartner research, by 2026 over 80% of independent software vendors will incorporate generative AI capabilities into their applications. This highlights an urgent need for open foundations like those being developed by Red Hat and Meta. Their collaboration addresses the need for seamless generative AI workload functionality across various platforms.
The partnership also involves contributions to foundational projects such as Llama Stack, which provides standardized building blocks for generative AI applications. As part of its commitment to supporting diverse agentic frameworks on its platform, Red Hat is contributing enhancements to Llama Stack's capabilities.
Ash Jhaveri from Meta expressed enthusiasm about this partnership: “We are excited to partner with Red Hat as we work towards establishing Llama Stack as the industry standard for seamlessly building and deploying generative AI applications.”
This collaboration underscores both companies’ dedication to fostering open innovation in AI technology development.