NVIDIA and Mistral AI partner to accelerate open-source AI
The two companies will optimize Mistral's new open-source Mistral 3 model family using NVIDIA's inference frameworks.
NVIDIA and Paris-based large language model (LLM) developer Mistral AI have formalized a strategic partnership to dramatically accelerate the development and optimization of new open-source models across NVIDIA’s sprawling ecosystem.
The collaboration, which follows joint work on the Mistral NeMo 12B model, aims to leverage NVIDIA’s platforms to deploy Mistral’s recently unveiled, open-source Mistral 3 family.
These models emphasize multimodal and multilingual capabilities and are designed for deployment from the cloud down to edge devices such as NVIDIA RTX PCs and Jetson hardware.
NVIDIA will integrate Mistral models with its AI inference toolkit, optimizing performance through frameworks such as TensorRT-LLM, SGLang, and vLLM, while drawing on its NeMo tools for enterprise-grade customization.