
Nvidia revealed a mind-blowing demo showing what it feels like to actually speak with AI game characters.


Nvidia’s demo shows what it looks like when the realms of gaming and AI merge completely; we can only imagine the visual splendor and anticipate the auditory masterpiece to come

During the recent Computex 2023 event held in Taipei, Nvidia CEO Jensen Huang presented an exciting demonstration that showcased the potential convergence of gaming and artificial intelligence (AI). The demonstration featured a visually stunning rendering of a cyberpunk-themed ramen shop, where an intriguing feature allowed players to engage in natural and immersive conversations with virtual characters.

Rather than relying on traditional dialogue options that require clicking through menus, Nvidia envisions a future where players simply hold down a button, speak in their own voice, and hear in-game characters respond. This voice-driven approach is Nvidia’s vision for the future of gaming.

Nvidia is in a direct fight with GPT-4

While the demonstration provided a captivating glimpse into this technological advancement, some critics pointed out that the quality of the dialogue fell short of expectations. They suggested that upcoming iterations of language models such as GPT-4 or Sudowrite might offer enhanced dialogue generation capabilities, surpassing the current level of realism.

Nvidia’s initiative to integrate voice-based interactions into gaming experiences holds immense potential for creating more immersive and engaging gameplay. As the development of AI progresses, we can anticipate future advancements that will further refine the interactive capabilities within video games, bringing players closer to a truly realistic and interactive virtual world.

From a single recorded conversation, it can be hard to see how this improves on picking options from a non-player character (NPC) dialogue tree. The noteworthy part is that the generative AI responds to natural speech. Nvidia could make the demo publicly accessible, letting people experience it firsthand and watch distinct outcomes emerge.

Nvidia shows off ACE (Avatar Cloud Engine)

Nvidia developed this demonstration in collaboration with Convai to promote the tools used to create it. Specifically, the demo leverages a suite of middleware called Nvidia ACE (Avatar Cloud Engine) for Games, which can run both locally and in the cloud. The ACE suite encompasses NeMo tools for deploying large language models (LLMs), along with Riva speech-to-text and text-to-speech functionality, among other components.
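To make that pipeline concrete, here is a minimal sketch of the speech-in, speech-out loop the ACE suite describes: speech-to-text, a persona-conditioned language model, then text-to-speech. Every function name below is a hypothetical placeholder stubbed out for illustration, not Nvidia’s actual NeMo or Riva API, and the persona string is invented for the example.

```python
# A minimal sketch of the speech-in, speech-out loop described above.
# All functions are hypothetical stand-ins, stubbed so the flow runs end to end;
# a real build would call STT, LLM, and TTS services (e.g., Riva/NeMo) here.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for a speech-to-text call: player audio in, transcript out."""
    return "What kind of ramen do you recommend?"

def generate_reply(transcript: str, persona: str) -> str:
    """Stand-in for an LLM call conditioned on the NPC's persona."""
    return f"[{persona}] Try the spicy miso, it's the house favorite."

def text_to_speech(reply: str) -> bytes:
    """Stand-in for a text-to-speech call: NPC line in, audio out."""
    return reply.encode("utf-8")  # placeholder for a synthesized waveform

def push_to_talk(audio: bytes, persona: str = "ramen shop owner") -> bytes:
    """One conversational turn: player speech -> transcript -> NPC line -> NPC voice."""
    transcript = speech_to_text(audio)
    reply = generate_reply(transcript, persona)
    return text_to_speech(reply)

if __name__ == "__main__":
    npc_audio = push_to_talk(b"<captured microphone audio>")
    print(npc_audio.decode("utf-8"))
```

The point of the sketch is the division of labor: the game only handles capturing audio while a button is held, and each stage of the pipeline can run locally or be swapped for a cloud service.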

Admittedly, the demo is built on more than those tools. It is constructed in Unreal Engine 5 with heavy use of ray tracing, and the resulting visuals are undeniably breathtaking, which, to my eye, makes the chatbot component feel weaker by comparison. At this point we have seen chatbots generate far more captivating dialogue, occasional banality and lack of originality notwithstanding.

In a Computex pre-briefing, Nvidia VP of GeForce Platform Jason Paul told me that yes, the tech can scale to more than one character at a time and could theoretically even let NPCs talk to each other — but admitted that he hadn’t actually seen that tested.

It’s not clear whether any developer will embrace the entire ACE toolkit the way the demo does, but S.T.A.L.K.E.R. 2: Heart of Chornobyl and Fort Solis will use the part Nvidia calls “Omniverse Audio2Face,” which tries to match a 3D character’s facial animation to their voice actor’s speech.
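For a rough intuition of what audio-driven facial animation involves, here is a toy sketch that maps the loudness of each audio frame to a single “jaw open” blendshape weight. This is not how Audio2Face works internally (it uses a trained neural network driving a full facial rig); it only illustrates the shape of the problem: audio features in, animation curves out.

```python
# Toy illustration of audio-driven facial animation: louder speech -> wider mouth.
# Not Audio2Face's method; just the simplest possible audio-to-blendshape mapping.

import math

def rms(frame: list[float]) -> float:
    """Root-mean-square loudness of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def jaw_open_curve(samples: list[float], frame_size: int = 256) -> list[float]:
    """One blendshape weight in [0, 1] per frame, normalized to the loudest frame."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    loudness = [rms(f) for f in frames if f]
    peak = max(loudness, default=1.0) or 1.0
    return [l / peak for l in loudness]

if __name__ == "__main__":
    # Fake "voice line": a tone whose volume swells and fades, as if speaking then pausing.
    voice = [math.sin(i / 40.0) * (0.2 + 0.8 * math.sin(i / 800.0) ** 2)
             for i in range(4096)]
    weights = jaw_open_curve(voice)
    print([round(w, 2) for w in weights[:8]])
```

A production system predicts dozens of correlated blendshapes per frame, including lip shapes for specific phonemes, rather than a single loudness-driven channel.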
