Dell Technologies building AI Factory with Nvidia, growing AI efforts with Hugging Face, Meta and Microsoft

Dell Technologies is expanding its generative AI portfolio with a series of new offerings announced today at the annual Dell Technologies World conference.

The Dell AI Factory is the company’s new strategy for technologies and services designed to make AI adoption simpler, more secure and more economical for enterprises. The offering includes a significant expansion of capabilities with Nvidia, going beyond the solutions the two companies detailed in July 2023. The Dell AI Factory with Nvidia integrates hardware and software from both Dell and Nvidia to help enterprises with gen AI initiatives. Among the specific use cases it targets are advanced Retrieval Augmented Generation (RAG) and digital assistants. Dell is also rolling out new hardware to support the recently announced Nvidia Blackwell GPUs.
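
The RAG pattern mentioned above pairs a language model with a retrieval step over an organization’s own documents, so answers can be grounded in private data rather than in the model’s training set alone. The sketch below illustrates the general idea only; the keyword retriever and the stubbed model call are placeholders, not components of any Dell or Nvidia product, and a real deployment would use a vector database and a hosted model endpoint.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The retriever and generator are deliberately simple stand-ins.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to a hosted LLM endpoint."""
    return f"[model response to a prompt of {len(prompt)} characters]"

def answer(query: str, corpus: list[Document]) -> str:
    """Retrieve supporting passages, then ask the model to answer from them."""
    context = "\n".join(d.text for d in retrieve(query, corpus))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

if __name__ == "__main__":
    docs = [
        Document("policy-1", "Laptops must be encrypted before leaving the office."),
        Document("policy-2", "Expense reports are due by the fifth business day."),
    ]
    print(answer("When are expense reports due?", docs))
```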

Dell is also deepening the Hugging Face integration it first announced in November 2023, adding more enterprise integrations. The same is true for Dell and Meta, where the two companies are building on their existing Llama partnership to support Llama 3.

While there are many options for gen AI in the cloud, Dell’s overarching goal is to make it as easy as possible for enterprises to assess gen AI and deploy the technology on-premises.

Dell AI Factory with Nvidia moves beyond Project Helix

This isn’t the first Dell Technologies World event where there has been AI news with Nvidia.

A year ago, the two companies announced Project Helix, an approach to help organizations build and deploy gen AI. A lot has changed in the enterprise gen AI landscape over the past year, and the Dell AI Factory with Nvidia is in part a reflection of those changes.

“When we launched Project Helix, it was very heavily on the training side,” Manuvir Das, VP of enterprise computing at Nvidia, told VentureBeat. “Now there’s a lot more on the inference side of the house actually using models.”

With demand shifting toward inference as well as RAG use cases, the recently announced Nvidia NIM (Nvidia Inference Microservices) approach is now coming to the Dell partnership. Das explained that NIM takes a major deployment problem out of developers’ hands: when a NIM container is placed on a server, it figures out how to set up the environment to run AI models efficiently, securely and with the right optimizations for the hardware. That lets developers focus on writing their applications rather than on the lower-level details of deploying and executing models.
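
The developer-facing appeal Das describes is that the application only sees a network endpoint, while the container handles the hardware-specific setup behind it. Below is a minimal sketch of what that application side might look like, assuming the deployed container exposes an OpenAI-compatible chat endpoint on a local port; the URL, port and model name are illustrative assumptions, not details confirmed in the article.

```python
# Sketch of an application calling a locally deployed inference microservice.
# Assumptions: the container listens on localhost:8000 and serves an
# OpenAI-compatible /v1/chat/completions route; "placeholder-llm" stands in
# for whatever model the container actually serves.

import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"

def ask(question: str) -> str:
    """Send a single chat request to the local endpoint and return the reply text."""
    payload = {
        "model": "placeholder-llm",
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 256,
    }
    resp = requests.post(NIM_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize our Q1 support tickets in three bullet points."))
```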
