
Generative AI and the metaverse: Where we are, and where we are going

Early signs show that generative AI will drastically shape how we build virtual worlds - to a point.

Generative AI has already appeared everywhere, with discussions spreading faster than the common cold at a tech conference. The metaverse will be affected too, and we got one step closer at CES. There, NVIDIA unveiled updates to its Omniverse platform, which connects with Move.ai for body movements and Lumirithmic for facial meshes. Think of it as a cohesive toybox for deploying experiences, with a spectrum of assistive tools each ready to pluck and apply. “NVIDIA has one of the broadest spectrums in the world for deploying generative AI applications at scale,” said Jesus Rodriguez in a great piece.

I agree, but the ramifications are much wider than that. In 2023, we may see a wave of generative AI tools pour into metaverse platforms, perhaps as bits and pieces rather than a cohesive package like Omniverse. Simulacra is one of the first metaverse platforms to offer these features. Meta unveiled a ‘Builder Bot’ tool that lets users speak objects into existence within Horizon Worlds, commanding shapes to pop into virtual spaces. We will see more tools that ease the building process and, perhaps, bring more people into the fold.

At the same time, I want to step away from speculation. Pundits often overhype generative AI, letting flights of fancy lift them from reality. Yes, the number of research papers on the topic is increasing, and one day we may see an AI tool that can build a fully-featured world from mere sentences. The seeds of this vision were planted in 2022 and before, but we are a long way from such a cohesive package.

Still, green shoots are peeking out of the ground. Companies already offer generative AI solutions that can benefit metaverse projects, from asset and texture creation to music generation, animation and NPCs.

So let’s follow these shoots and explore what’s possible today and in the immediate term, then take a deeper look at what needs to be done for continued innovation.

Generative AI will revolutionise how we make virtual worlds in the metaverse. Credit: Midjourney.

Executive summary

A summary of my findings, with an expansion on each point throughout the piece:

  • Innovations in gaming are a precursor to metaverse developments. Generative AI is already helping the video game industry accelerate the time-consuming processes of asset creation, experience design and testing, allowing for faster development and iteration. The same benefits apply to metaverse projects, where virtual worlds can be easily built, tweaked and populated with NPCs, allowing for greater productivity and potentially transformative applications.

  • Early progress looks promising. Generative AI tools are already being used or experimented with in metaverse platforms, such as Horizon Worlds and Simulacra, to ease the building process and attract more users. Further, companies are already offering generative AI solutions that can be used in metaverse projects, such as asset and texture creation, music generation, animation and NPCs.

  • Early innovations would involve virtual world templates. A clear example of the technology’s potential would be generating assets and virtual worlds, which players can then shape into their own vision.

  • Complementary, not a replacement. Generative AI is helpful for the creative process, but it cannot replace humans in the short term. What's more likely is that humans will operate on both sides of an AI content tool, shaping what goes in and trimming what comes out.

  • Great editing tools must complement AI. Generative AI can be used to create content on virtual platforms, but it may lead to low-quality content if the tools to shape that content are not functional. Streamlined editors are important so that users can experiment with and adopt the technology.

If you find the analysis helpful, consider subscribing to the Immersive Wire’s weekly briefing on the metaverse.

How is AI related to the metaverse?

AI is already being used across multiple metaverse platforms, from recommending and curating experiences to generating avatars based on user inputs. The difference here is scale: generative AI has the potential to step beyond these narrower applications and generate whole worlds and assets from text or voice inputs.

Take video games, where AI is already making waves. It’s not a surprise; of all industries, video games will likely be the most disrupted by the trend. James Gwertzman and Jack Soslow at a16z Games point to the time-sucking processes of asset creation, experience design, and iterative testing. Assets that could have taken weeks to finalise can take just a few hours, with some minor tweaks and embellishments. That speed condenses stages of development that may have taken months or years into a short burst, which can then feed the next phase of work. The swift rendering matters too: if assets can be tweaked and tested quickly, iteration and finalisation eat far less of an employee’s time.

The same benefits apply to metaverse projects. Virtual worlds can be built from simple inputs to lay the framework. Gameplay additions can be layered on as well, then tweaked to the user’s requirements. Once built, the virtual world can be populated with NPCs that follow set instructions, speaking without needing voice actors. “There is going to be a time when developers are going to spend less time writing code and more time expressing what they need,” said Danny Lange, senior vice president of AI at Unity. “That increased productivity is going to be transformative for applications of the metaverse outside a narrow space, like gaming.”
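To make that NPC idea concrete, here is a minimal sketch of how a generated character might work in practice. The `generateText` helper and the NPC structure are hypothetical stand-ins for whichever text-generation service a platform plugs in; nothing here reflects a specific product’s API.

```typescript
// Minimal sketch: an NPC whose dialogue is produced by a text-generation model.
// `generateText` is a hypothetical wrapper around whatever service you use;
// it is illustrative only, not a real API.
type GenerateText = (prompt: string) => Promise<string>;

interface NpcConfig {
  name: string;
  persona: string;        // the "set instructions" the NPC must follow
  worldContext: string;   // where the NPC lives and what it knows
}

class GenerativeNpc {
  constructor(private config: NpcConfig, private generateText: GenerateText) {}

  // Build a constrained prompt from the NPC's instructions, then ask the model.
  async reply(playerLine: string): Promise<string> {
    const prompt =
      `You are ${this.config.name}, an NPC in a virtual world. ` +
      `Persona: ${this.config.persona}. Context: ${this.config.worldContext}. ` +
      `Stay in character and answer in one or two sentences.\n` +
      `Player says: "${playerLine}"`;
    return this.generateText(prompt);
  }
}

// Usage: plug any text-generation backend in behind the same signature.
const npc = new GenerativeNpc(
  { name: "Mara", persona: "a gruff blacksmith who gives quest hints", worldContext: "a frontier trading town" },
  async (prompt) => `(model output for: ${prompt.slice(0, 40)}...)`, // stub backend
);
npc.reply("Where can I find iron ore?").then(console.log);
```

The point of the sketch is the division of labour: the platform supplies the constraints, the model supplies the words, and no voice actor or scriptwriter is needed for every line.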

These experiences will not match the quality of AAA titles in console gaming. God of War: Ragnarok would lack impact if Kratos – played by the award-winning Christopher Judge – spoke with a replicable voice and intonation. Nor would AI tools match the exact specifications for the vines in Vanaheim, or the frosted mountains of Niflheim. But it cuts the process down so much that it makes it easier for anyone to make worlds or spaces, which encourages innovation and experimentation. It’s the elixir of tools, potential and connectivity that could make metaverse platforms thrive.

The technology will not replace dedicated gaming studios, but it will complement work processes. Credit: Midjourney.

Auto-generating virtual worlds as templates

One clear application is generating worlds. I’m not the best builder - my Lego sets attest to that. The same goes for making a website, even with helpful frameworks. But building virtual worlds, on any platform, takes the complexity to a new level. Professionals worry about the position and design of objects, the interactions of NPCs, and the exact height and shape of locations - a far cry from a simple 2D screen with images or words.

Savvy creatives can build web-based experiences via A-Frame, or use the tools already supplied within metaverse platforms. Horizon Worlds has a great building experience, as does Rec Room. They all work, but additional components can help smooth out the ideation process. Plus, frictionless tools expand adoption across industries; WordPress made it easier for people to build websites, and TikTok’s editor made it easier to create short-form content. Generative AI can auto-populate virtual worlds via text or voice prompts, saving time and effort. The same tools can also generate images and clips that can be published across multiple formats, from websites to YouTube.
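As a rough illustration of how that auto-population could look on the web, here is a minimal sketch that drops generated 3D assets into an A-Frame scene. The `fetchGeneratedAssets` function and the `/api/generate-assets` endpoint are hypothetical stand-ins for whichever text-to-3D service returns glTF files for a prompt; the A-Frame side (creating `<a-entity>` elements and setting `gltf-model` and `position`) follows the framework’s standard pattern.

```typescript
// Sketch: populate an A-Frame scene with assets generated from a text prompt.
// Assumes A-Frame is loaded on the page and an <a-scene id="world"> exists.
// `fetchGeneratedAssets` is a hypothetical text-to-3D endpoint returning glTF URLs.

interface GeneratedAsset {
  modelUrl: string;                                // URL to a generated .glb / .gltf file
  position: { x: number; y: number; z: number };   // where to place it in the scene
}

async function fetchGeneratedAssets(prompt: string): Promise<GeneratedAsset[]> {
  // Placeholder: in reality this would call whatever generation service you use.
  const response = await fetch(`/api/generate-assets?prompt=${encodeURIComponent(prompt)}`);
  return response.json();
}

async function populateWorld(prompt: string): Promise<void> {
  const scene = document.querySelector("#world");
  if (!scene) return;

  const assets = await fetchGeneratedAssets(prompt);
  for (const asset of assets) {
    // Standard A-Frame pattern: create an entity and point it at a glTF model.
    const entity = document.createElement("a-entity");
    entity.setAttribute("gltf-model", asset.modelUrl);
    entity.setAttribute("position", `${asset.position.x} ${asset.position.y} ${asset.position.z}`);
    scene.appendChild(entity);
  }
}

populateWorld("a small forest clearing with mossy rocks and a campfire");
```

The creator still decides the prompt and rearranges the results afterwards; the generation step only removes the blank-canvas problem.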

We already have technology that can do this. NVIDIA GET3D, trained only on 2D images, can generate textured 3D shapes that can then be used as assets. “GET3D brings us a step closer to democratizing AI-powered 3D content creation,” said Sanja Fidler, vice president of AI research at NVIDIA. “Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects.”

Going beyond the hype

Though exciting, generative AI has sparked sky-high thinking that filled the heads of pseudo-professionals with fumes of nonsense. One narrative focuses on replacement, and how it will replace large swathes of the workforce. Nathan Benaich, general partner at Air Street Capital, argues for nuance in an interview with Sifted. “The reality is – even though the progress right now seems like it’s exponential in images, video and text – I think there’s just so many nuances that companies need to solve for when they take these capabilities and build products for many people to use in a workflow.”

Personally, I do not see artists disappearing from the process. Humans are needed at all stages of the creative process in order to present a cohesive vision. Think of it as two humans on either side of an AI machine. One types out what’s needed: the broad shape of an idea, with parameters and conditions. The machine churns and works, doing the hard work of bringing the image to reality. When it is spat out on the other side, the second human clips and shapes the asset into a closer vision of what the team needs. The AI sits in the middle, picking up the hard work of design, building and colour - but it is not a replacement.

I see generative AI as a supplementary tool for metaverse development. Like video games, it will provide an assistive supplement to the creation of virtual spaces.

We have a long way to go before generative AI will create seamless worlds in the metaverse. Photo credit: Midjourney.

Predictions on how to use generative AI well with the metaverse

Generative AI will have a huge impact. It assists with the creative process, laying out the framework of a virtual world which can then be tweaked, edited, or replaced in the final version. I can see a deluge of low-quality virtual worlds sprouted by an AI, as a hodgepodge of items dedicated to popular IPs. We will no doubt see a virtual world where assets are inspired by a blend of Pirates of the Caribbean and Harry Potter. (Also, it would not shock me if these worlds are popular too, because players search with high-volume keywords across platforms).

I also predict that a metaverse platform will catapult to mainstream stardom if it is web-based and offers generative AI elements. The winning platform may be frictionless to use, make it easy to generate worlds, and offer keyword-based discovery. That potent cocktail would create a creator-led ecosystem that could rise in popularity swiftly. Simulacra is well-positioned, based on its previously-discussed announcements.

Finally, these outputs need to be complemented by a robust set of creator tools to shape the content itself. Generative AI could work on virtual platforms, but it could lead to a deluge of low-quality content if the tools to shape it do not function well. If a world is made but is difficult to edit, users may clock off quickly. Streamlined editors have always been important for experimentation and adoption; the same is true here, as a complement to the technology.

I do not believe AI will replace human creativity. Ultimately it is a tool, and it can be used well or poorly. As Mokokoma Mokhonoana once said, “the usefulness of a thing is dependent not on what it is, or how it can be used, but on the needs or wants of the user.”