Nvidia has announced two new services that help enterprises and countries build customized, specialized generative AI models using Meta’s latest LLM, Llama 3.1. The technology giant released a model-building service within Nvidia AI Foundry and inference microservices within NVIDIA NIM. Both services leverage Meta’s latest family of open-source LLMs, Llama 3.1, and can be used to create custom generative AI “supermodels.”
“…NVIDIA AI Foundry has integrated Llama 3.1 throughout and is ready to help enterprises build and deploy custom Llama supermodels.”
The generative AI models of Llama 3.1 were trained on more than 16,000 Nvidia H100 Tensor Core GPUs. In addition, they are optimized for Nvidia’s accelerated computing and software, which enables deployment in data centers, in the cloud, and on GPU-powered personal computers.
Many companies worldwide already have access to NIM microservices for Llama. Accenture is the first client to build custom Llama supermodels, for Aramco, AT&T, and Uber, and these corporations will be the first to access NIM microservices built on Llama 3.1. After successfully creating a custom model, an enterprise can choose Nvidia’s microservices, an MLOps platform, and a cloud platform to run it.