Creating a video game demands hard, repetitive work. How could it not? Developers are in the business of building worlds, so it’s easy to understand why the games industry would be excited about generative AI. With computers doing the boring stuff, a small team could whip up a map the size of San Andreas. Crunch becomes a thing of the past; games release in a finished state. A new age beckons.
There are, at the very least, two interrelated problems with this narrative. First, there’s the logic of the hype itself—reminiscent of the frenzied gold rush over crypto/Web3/the metaverse—that, consciously or not, seems to consider automating artists’ jobs a form of progress.
Second, there’s the gap between these pronouncements and reality. Back in November, when DALL-E was seemingly everywhere, venture capital firm Andreessen Horowitz posted a long analysis on their website touting a “generative AI revolution in games” that would do everything from shorten development time to change the kinds of titles being made. The following month, Andreessen partner Jonathan Lai posted a Twitter thread expounding on a “Cyberpunk where much of the world/text was generated, enabling devs to shift from asset production to higher-order tasks like storytelling and innovation” and theorizing that AI could enable “good + fast + affordable” game-making. Eventually, Lai’s mentions filled with so many irritated replies that he posted a second thread acknowledging “there are definitely lots of challenges to be solved.”
“I have seen some, frankly, ludicrous claims about stuff that’s supposedly just around the corner,” says Patrick Mills, the acting franchise content strategy lead at CD Projekt Red, the developer of Cyberpunk 2077. “I saw people suggesting that AI would be able to build out Night City, for example. I think we’re a ways off from that.”
Even those advocating for generative AI in video games think a lot of the excited talk about machine learning in the industry is getting out of hand. It’s “ridiculous,” says Julian Togelius, codirector of the NYU Game Innovation Lab, who has authored dozens of papers on the topic. “Sometimes it feels like the worst kind of crypto bros left the crypto ship as it was sinking, and then they came over here and were like, ‘Generative AI: Start the hype machine.’”
It’s not that generative AI can’t or shouldn’t be used in game development, Togelius explains. It’s that people aren’t being realistic about what it could do. Sure, AI could design some generic weapons or write some dialog, but compared to text or image generation, level design is fiendish. You can forgive generators that produce a face with wonky ears or some lines of gibberish text. But a broken game level, no matter how magical it looks, is useless. “It is bullshit,” he says. “You need to throw it out or fix it manually.”
Basically, and Togelius has had this conversation with multiple developers, no one wants level generators that work less than 100 percent of the time. A single broken level can render a whole title unplayable. “That’s why it’s so hard to take generative AI that is so hard to control and just put it in there,” he says.