Nvidia Targets Metaverse with Generative AI Models and NIM Microservices

  • Aiming for metaverse applications, Nvidia adds new OpenUSD NIM microservices, boosting performance and efficiency.
  • With more to come, the new microservices include USD Code NIM, USD Search NIM, and USD Validate NIM.
  • Nvidia’s approach centres on incorporating AI capabilities into metaverse applications to take the lead in the developing market.

The vendor introduces new generative AI models that will be available as microservices. The latest models show that the metaverse is still a part of Nvidia’s strategy.

AI hardware and software vendor Nvidia has introduced new NIM microservices for the Universal Scene Description, or OpenUSD, standard for metaverse visual applications.

During SIGGRAPH, the computer graphics conference in July, Nvidia revealed that generative AI models for OpenUSD development will be available as Nvidia NIM microservices.

They are in preview now. The move comes after Nvidia introduced the microservices at its GTC developer conference earlier this year.

Nvidia Targets Metaverse with OpenUSD NIM Microservices

NIM microservices enable enterprises to create and deploy custom applications on their platforms. The new OpenUSD NIMs will let developers incorporate generative AI models, copilots, and agents into USD workflows. Microservices include USD Code NIM, USD Search NIM, and USD Validate NIM, all available in preview.

The USD Code NIM microservice answers general USD questions and generates OpenUSD Python code from text prompts. The USD Search NIM microservice lets developers search massive OpenUSD and image data libraries using natural language or image inputs. The USD Validate NIM microservice checks whether uploaded files are compatible with OpenUSD release versions.
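To make the workflow concrete, here is a minimal sketch of how a developer might prepare a request for a USD Code-style microservice. The endpoint URL, model name, and field names below are illustrative assumptions for a chat-completion-style API, not Nvidia's documented interface; the payload is only constructed, not sent.

```python
import json

# Placeholder endpoint; a real deployment would supply its own URL.
NIM_ENDPOINT = "https://example.com/v1/chat/completions"

def build_usd_code_request(prompt: str, model: str = "usd-code") -> str:
    """Build a JSON payload asking the service to generate OpenUSD Python code.

    The 'model' identifier and message schema are assumptions modeled on
    common chat-completion APIs, used here purely for illustration.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 512,
    }
    return json.dumps(payload)

# Example: request Python code that adds a cube to a USD stage.
body = build_usd_code_request(
    "Write OpenUSD Python that adds a cube to a new stage"
)
print(body)
```

In practice the payload would be POSTed to the deployed microservice with an authorization header, and the returned completion would contain the generated OpenUSD Python snippet.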

Other microservices, such as USD Layout NIM, USD SmartMaterial NIM, and fVDB Mesh Generation NIM, will be available soon.

Nvidia’s Metaverse Strategy Unveiled

Unlike the generative AI boom, the metaverse failed to gain immediate, wide popularity. It remains confined primarily to virtual and augmented reality headsets and some industrial applications, such as digital twins.

According to Forrester Research analyst Charlie Dai, the expansion of NIM microservices shows both Nvidia’s commitment to generative AI and its ambitions in the physical and digital worlds.

“For the metaverse, Nvidia’s Omniverse platform continues to be a cornerstone of their strategy to enable the creation and connection of 3D virtual worlds,” Dai said. “These microservices are one of the stepping stones on this journey.”

One challenge for the metaverse is the lack of standardization needed to bring together the virtual environment’s elastic, scalable infrastructure, compute power, storage, and data. This has made it difficult to establish USD as the interchange format for 3D and metaverse data, according to Constellation Research analyst Andy Thurai.

With its NIM microservices, Nvidia hopes to bring generative AI capabilities to robotics, the metaverse, industrial design, and digital twins, Thurai said. Through the visualization and simulation of environments with the USD Code NIM microservice, he added, Nvidia could help users revisit parts of the metaverse that were previously too difficult to develop, such as virtual and augmented reality worlds.

Nvidia introduces new NIM microservices to the OpenUSD standard, targeting metaverse applications. [Photo: NVIDIA Newsroom]

However, adoption will be the biggest challenge for the AI vendor. “The industrial areas they are taking on are too many and are very distributed both in the technology and in standards,” Thurai said. “It will be tough to convince customers to adopt this.” Meanwhile, he added that the Alliance for OpenUSD was created to help industrial companies adopt advanced technologies like the metaverse.

Besides supporting the industrial metaverse, Nvidia is also looking ahead, Thurai continued. Generative AI adoption is slowing, and enterprises are not adopting the technology at the pace at which they were experimenting. “If the market slows down, it could hit Nvidia hard,” he said. “They are staying ahead of the curve by thinking and innovating this and being a market maker again.”

Generative AI Models Transforming Omniverse

In another development, Nvidia’s partner Getty Images revealed on July 29 that it updated its generative AI image-generating model. The updated model was built on the Nvidia Edify model architecture. Edify is part of Nvidia Picasso, a platform for creating and deploying generative AI models for visual design.

Generative AI by Getty Images and iStock, also from Getty Images, now offers generation times of about six seconds, enhanced detail in generated images, longer prompt support, and more control over output through shot type and depth of field. Users can also modify both AI-generated images and existing preshot images.

In addition, Nvidia introduced fVDB, a deep learning framework for generating AI-ready virtual representations of the real world. The AI vendor also revealed that Hugging Face will offer developers Inference-as-a-Service powered by Nvidia NIM.

The introduction of Nvidia’s NIM microservices for OpenUSD and its continued advancements in generative AI models underscore the company’s commitment to the metaverse.

While the path to widespread adoption remains challenging, Nvidia’s strategy to integrate AI capabilities into metaverse applications demonstrates its ambition to lead in this emerging space. By providing the tools and infrastructure needed to create and connect virtual worlds, Nvidia is positioning itself at the forefront of the metaverse revolution.
