Developers seem ready to embrace tools that use generative artificial intelligence, but want to avoid the legal hurdles.
It's 2023, and generative AI is the hot buzzword of the year. Tools like Midjourney and ChatGPT began bubbling to the surface of popular usage in 2022, and now that we're well into the new year, everyone from Microsoft to Google to Meta is making a hard play at selling these tools.
After two years of blockchain obsession, the game industry's venture capital class seems to have quickly pivoted to a fixation on how these tools could be used to make games better, faster. However hyperbolic their claims get, it's hard to deny that they have a point. Some tasks in game development take a painfully long time, and the industry has long embraced software and game engines that can speed up those processes.
A gathering like DICE 2023 was the perfect chance to check in on how industry leaders and top creatives in the field feel about the technology, and to our surprise, the vibes were mostly...good? Developers at companies of all shapes and sizes expressed enthusiasm for what generative AI tools could accomplish, tempered with a healthy dose of caution.
By now the legal questions about using tools like Midjourney have bubbled to the surface, and if nothing else, it seems that developers want to make sure they're not stealing anyone else's work when they're using these programs.
The overall tone felt markedly different from our visit in 2022, when opinions on blockchain technology in games ranged from hype-fueled fever dreams to outright hostility. This year, talk of blockchain became an almost literal punchline, while general conversation pivoted to the question of generative AI.
There were plenty more conversations between folks with different opinions this year, and a hesitancy to go all-in on technology without understanding it.
Based on our chats, we feel reasonably confident that the video game industry will begin adopting these tools in 2023, and if their promises of speeding up production hold true, games that are built with these tools could show up as early as next year.
Across all of our conversations, there was universal agreement among game developers: everyone wanted the game-making process to be easier.
And an easier game development process means that smaller teams can produce more content or bigger games. Owlchemy Labs cofounder Devin Reimer compared this moment to the arrival of licensed game engines. When smaller developers no longer needed to roll their own engines and invest in their long-term maintenance, they were able to turn out more games at a faster pace.
"I think that's the thing that's kind of lost [in these discussions] is like, all of a sudden, there's a new group of people that might not have been able to cross that quality bar who now might have the power to do so," he said.
His fellow c-suite "owl," Andrew Eiche, agreed, calling generative AI a type of tool that follows an industry trend. "When I started in this industry, and you were like, 'I want to make a game,' you had to open up a DirectX textbook and read how to put triangles on a screen," he recalled. "Now I see Global Game Jam games that look better than anything I made for the first five years of my career."
How might Owlchemy Labs adopt these tools? The two pointed to the studio's focus on comedy games, and how comedy can be really hard to prototype and iterate on from a technological standpoint. They discussed using generative AI voice tools to experiment with jokes before going on to work with professional voice actors in the final product (a practice that, admittedly, is already generating some controversy).
Reimer was also excited about the idea of quickly producing variations on 3D assets to more quickly build compelling environments in virtual reality.
Members of the God of War: Ragnarok team across different departments also expressed interest and optimism about AI tools. Design director Jason McDonald expressed optimism that AI tools could be used to create more dynamic and deep NPC interactions.
Lead music producer Peter Scaturro said he expects to be using AI tools when producing game soundtracks in the future. "I don't see it as a replacement for what we're doing, what we're trying to convey artistically," he said. "It's going to assist in a big way during the production process." He described wanting to use these tools to cut down on the "labor-intensive" tasks that can come with producing his team's award-winning work.
Even studio art director Raf Grassetti and art manager Tim Spangler expressed optimism about these AI art tools—even while the art world has pushed back the hardest against the technology. "We understand the community and how they feel about it," Spangler said. "We're very sensitive to that. But at the same time, we want to see where it goes and be in the right position to figure out the right use."
Double Fine CEO Tim Schafer described how ideally, AI tools could be used to jog the brains of creatives. He compared the process of using them to how concept artist Peter Chan used to draw shapes and characters on a piece of paper, then rub them around on a Xerox machine to create interesting shapes and colors that might inspire the company's art direction.
"I really don't think AI will replace [artists], because art is about being in contact with another person and being in contact within their consciousness," he explained. "If AI becomes so compelling, they just feel like another consciousness—maybe people would be excited about that."
Double Fine sure likes making games about touching other people's consciousnesses.
Controversies over AI boil down to two major categories. First, there are the questions about how these tools will be used. Companies like CNET have already floated the trial balloon of having generative AI tools like ChatGPT write articles, which led to semi-disastrous results.
Developers and artists have expressed nervousness over how eager business leaders are to replace their work, and no matter how things shake out, those anxieties probably aren't going away.
Those fears are valid (I cannot stress enough how a month of media industry layoffs and talk of replacing writers with AI stressed me out as a tradesperson who strings together sentences for a living), but industry leaders at DICE didn't seem as bothered by them. Instead, the number one question on everyone's mind was "will it be legal to use these tools?"
For context, legal questions about who owns the right to AI-generated art are already stumping legal experts. There are some major problems with popular tools like Midjourney and other Stable Diffusion-based tools that rely on visual data scraped into the open-source LAION dataset.
Those open-source datasets were assembled by scraping the internet for art and photographs for the purpose of academic research, but now are being adopted by toolmakers using them for commercial purposes. And that is a massive legal can of worms. Some users have found images from their own medical exams in the dataset, and artists have noticed that these tools regularly insert identifiable elements of their work without giving any credit or royalties.
(It doesn't help that Midjourney's lead developers seem, uh, hostile to the notion of respecting copyright.)
And at the other end of the equation, the United States Copyright Office has already begun kicking back copyright requests for AI-generated work. It's been permissive about the original ideas and writing that goes into some of these products, but cautious about work generated from such datasets.
So what does a more ethical AI tool look like? We spotted one that seemed to solve the dataset-driven issues: Didimo. Didimo is a startup that previously focused on technology that would let developers capture real-world faces and turn them into video game avatars.
It was cool technology, but it existed in a crowded market. Then, as Peter Alau and João Orvalho (senior director of business development and chief technology officer at Didimo, respectively) put it—generative AI arrived, and they realized their research and technology could be used to generate thousands of 3D avatars for game developers.
In a quick demo, Alau and Orvalho showed off how the tool works. It didn't look that different from other game development tools—in fact, it closely resembled many in-game character creators that developers build these days. The pair were able to show off a huge library of avatars that their software had quickly generated, and how developers could alter those avatars by adjusting a number of sliders. All the models come fully rigged and ready to animate.
The general use case Didimo is pitching developers on isn't using this tool for player characters or high-value NPCs, but for background characters and filling out open worlds. You might use it to create unique-looking soldiers for a strategy game, or to populate an open-world city like the one in Marvel's Spider-Man.
Alau said that this tool also saves on memory by building one original model, and then all other variants are created using "blend-shape variants" that use a tiny amount of data. He said that developers would be able to put "thousands" of characters onscreen without eating into the memory budget.
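The memory savings Alau describes are how blend shapes (also called morph targets) generally work: the engine stores one shared base mesh plus a small set of per-vertex offset shapes, and each character variant is just a short vector of weights. Here's a minimal sketch of that idea in Python with NumPy—this is an illustration of the general technique, not Didimo's actual implementation, and all names and numbers are made up for the example:

```python
import numpy as np

# Shared data, stored once: a base mesh and a few "blend shape"
# offset sets (e.g. wider jaw, longer nose, heavier build).
base_mesh = np.random.rand(5000, 3)         # 5,000 vertices, xyz positions
blend_shapes = np.random.rand(10, 5000, 3)  # 10 shared per-vertex offset sets

def make_variant(weights):
    """Reconstruct a full character mesh from a tiny weight vector.

    Each variant stores only 10 floats; the full 5,000-vertex mesh
    is rebuilt on demand as base + weighted sum of the offset sets.
    """
    return base_mesh + np.tensordot(weights, blend_shapes, axes=1)

# A crowd of 1,000 unique characters costs 1,000 x 10 floats of
# unique data, instead of 1,000 x 15,000 floats if every character
# carried its own copy of the mesh.
crowd = [np.random.rand(10) for _ in range(1000)]
meshes = [make_variant(w) for w in crowd[:3]]  # rebuild a few on demand
```

With all weights at zero the function returns the base mesh unchanged, which is why the marginal cost of each extra variant is so small: only the weight vector is new data.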
When we asked what dataset this avatar tool was pulling from, Alau and Orvalho explained that it was all proprietary data or data licensed from research institutions. "We have our own internal model built on [original] scans," Orvalho said. He explained that it's all based on research into how to render faces in three dimensions, conducted first over 15 years at a university, and then over the last six years of the company's history.
"It's all photos that we've taken of ourselves or photos where we've gotten the likeness rights," Alau added. "We are not taking other people's work."
So once developers license the tool, they'll be able to claim full ownership over any characters it generates. That clears the legal hurdle, but we did want to know—what does the team at Didimo think about concerns over job replacement?
Alau took point on this topic, saying that startups working on generative AI creation tools should "focus on what the customer needs, not 'what's the fastest way to make money.'"
"I see way too many people in our space that say 'oh, well, if I do this really quickly and go really fast to this, I can get to that and get to my exit,'" he said. "The goal is the exit, and you can tell that in the pitch."
Alau added that in many of the meetings he took at DICE, developers were asking if tools like this would be used to cut jobs or replace humans at a game studio. "We look at this work as accelerating the productivity of all the artists," he said. One of Didimo's clients apparently did the math, and they managed to take the cost of filling a street scene from $1 million to $100,000, by reducing the amount of labor needed to cross the finish line.
"It's not the democratization of [game tools], it's the acceleration of them," he said.
The folks who expressed the most cynicism about AI tools were actually still optimistic about how it could help their teams. But they were ready to call out that these technologies could be used to eliminate some of the very fulfilling and human reasons that people make or play video games.
Bay 12 Games cofounder Tarn Adams told us that he's mostly excited for what artists can do with these tools, and that he's ready for people to take them in directions we haven't seen yet. But he also recalled how the arrival of AI-driven Chess opponents impacted the world of competitive Chess.
He recalled how optimists declared that "the human will work with the computer, and that will make the ultimate Chess player." But that's not exactly what happened. "The chess computers just won and destroyed them," he pointed out. Some champion Chess players even quit the sport out of frustration with playing against these machines (though some recent exploits in defeating their programming have shown there may yet be open avenues of attack against such programs).
Wherever generative AI tools take the industry, developers like Schafer seemed to have a fixed north star that would guide Double Fine's path with these tools. "Art is about feeling like 'I'm not alone in the world because I just made contact through art with another soul,'" he said.
If these tools can still be used to build that connection—to carry through intent, expression, and what we love or hate about our world, then they're likely to be welcomed by developers in the years ahead.
Update: This story previously contained a misspelling of Didimo's name. It has been updated with the correct name.