
How Shinra's streaming tech works, and what it means for game devs

Shinra's Colin Williamson seeks to shine some light on how the company's remote rendering technology works and why game developers should care about this new spin on game streaming.

Alex Wawro, Contributor

February 5, 2015


Late last year Square Enix established Shinra Technologies, a standalone venture led by former Square Enix CEO Yoichi Wada which seeks to build game streaming tech that developers can use to power their own projects.

Now, Shinra is eager to start wooing developers to its platform. Last month it held its first North American developer event in Portland; this week the company launched its own developer accelerator program with the goal of encouraging folks to make games that take advantage of Shinra's streaming tech, which is expected to begin a technical beta-testing phase later this month in Japan.

"As with any new platform, it’s a lot of work to convince developers and publishers that we’re worthy," says Shinra's James Mielke. "But we believe that remote rendering is the future of gaming, and we’re approaching the challenge very seriously."

But the details of that approach are still nebulous. In this edited interview, Shinra's developer partnerships chief Colin Williamson (pictured) seeks to shine some light on how the technology works and why game developers should care about yet another game streaming service.

I think a lot of game devs still see Shinra as "Square Enix's shot at building game streaming tech" and thus not super-applicable to their personal work. Why should devs, especially independent devs, be interested in tying their games into Shinra?

Colin Williamson: Our goal is to explore the huge break in design that presents itself when you remove game calculation from a box under your TV, and move it into a supercomputer in the data center where everyone’s connected via high-speed Internet. We’re not thinking in terms of individual game processes that are networked together across multiple geographies; we’re thinking of one gigantic, centralized process that’s running everything.

Our engineering team calls it 1:N architecture, where you have one supercomputer driving many, many clients at once -- that’s the gameplay code and rendering -- all in one place. Think of it as a mega-instance of a game where each client is only providing controller input from their side, and looking into the game instance via their own viewport, which is streamed back as video. The big draw to this is that all of the gameplay calculations are only happening once; the need to write complex network code for multiplayer games, well, that’s basically gone.

I’ve talked to a ton of developers who had to scrap their plans for multiplayer very early in development. Most hit the barrier of “Oh, I can’t reliably synchronize the number of gameplay entities that are on screen,” or they have to scale things back to the point where the essence of the single-player mode’s fun isn’t there anymore -- not to mention multiplayer QA is a nightmare. I want our system to solve a lot of that.

With the Shinra technology, the goal is for you to build the core game like you’d build a local co-op or split-screen title. You’re not going to have to worry about synchronizing gamestate across 32 clients with spotty connections and questionable bandwidth. The game’s running in one place, and you’re just adding a viewport for each new player. It’s efficient, scalable, and allows for totally new experiences.
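
To make the 1:N model concrete, here is a minimal sketch of what such a server loop might look like. Every name here (Simulation, Client, encodeAndStream, and so on) is invented for illustration -- this is the shape of the architecture Williamson describes, not Shinra's actual API: input comes in from each client, one shared simulation ticks, and each player's viewport is rendered and streamed back as video.

```cpp
// Minimal 1:N sketch -- invented names, not Shinra's API. One simulation
// runs everything; each client contributes only controller input and
// receives its own viewport back as encoded video.
#include <cstdint>
#include <vector>

struct Input  { float stickX = 0, stickY = 0; bool fire = false; };
struct Frame  { std::vector<uint8_t> pixels; };    // one rendered viewport

struct Client {
    int   id;
    Input latest;   // the only thing that crosses the wire from the player
};

struct Simulation {
    void  applyInput(int clientId, const Input& in) { /* move that player */ }
    void  step(float dt)                            { /* the single shared tick */ }
    Frame renderViewport(int clientId)              { return {}; /* per-player camera */ }
};

void encodeAndStream(const Frame&, const Client&) { /* video encode + send */ }

// Joining is just adding a viewport -- like handing a pad to a split-screen
// player: no state sync, no rollback, no lobby migration.
void onClientConnect(std::vector<Client>& clients, int newId) {
    clients.push_back(Client{newId, Input{}});
}

void serverLoop(Simulation& sim, std::vector<Client>& clients) {
    const float dt = 1.0f / 60.0f;
    for (;;) {
        for (auto& c : clients) sim.applyInput(c.id, c.latest);
        sim.step(dt);                            // gameplay calculated exactly once
        for (auto& c : clients)                  // each player gets their own view
            encodeAndStream(sim.renderViewport(c.id), c);
    }
}
```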

The other key feature is turning heavy calculation into actual gameplay. Let’s say you want to do a 16-player game with several million rigid-body physics objects bouncing around. Something like that is super-easy to do on 1:N, since all the clients are in one place. The guys in Montreal are working on some amazing stuff that we’ll be ready to show in March.
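
A hedged sketch of that shared-simulation win, with invented types: one rigid-body world is stepped once, every player's camera reads from it, and nothing is replicated over the network.

```cpp
// Invented, heavily simplified sketch: one physics world serves all players.
#include <vector>

struct Body { float x = 0, y = 0, vx = 0, vy = 0; };

struct SharedPhysicsWorld {
    std::vector<Body> bodies;   // millions of rigid bodies, stepped exactly once
    void step(float dt) {
        for (auto& b : bodies) { b.x += b.vx * dt; b.y += b.vy * dt; }
    }
};

// On a conventional client/server model, every body's position would have
// to be serialized to every client each tick; here the 16 per-player
// viewports simply read the one authoritative copy, which never leaves
// the machine.
```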

So yes, the tech works well, but we have to find the fun, and that’s why we’re actively asking for concepts that take advantage of it. By the way, some of the best game concepts we’ve seen have come from people who aren’t necessarily multiplayer designers and don’t know the limitations of the traditional networked game model. They’re essentially pitching us something that should be impossible -- and our response is usually “Yeah, we can probably make that work.”

You must have studied competing systems like OnLive and PS Now. On a purely technical level, how does Shinra's system differ?

Iwasaki-san [Shinra's SVP of Technology] has put tremendous effort into scalability. If we were to follow a model where we allocate a user’s game process to their own single CPU and GPU and then add a video encoding pass, that wouldn’t be economical for us once we’re running at scale; there’s too much inefficiency when the CPU or GPU is idling. If we’re running games in a data center, we need to make sure we’re using the allocated hardware at 100 percent, or as close to that as possible.
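
To put rough numbers on that utilization argument, a back-of-envelope sketch -- the figures here are invented for illustration, not Shinra's:

```cpp
// Invented numbers, purely illustrative: if a single game instance keeps a
// CPU core only 25% busy, a one-user-per-core model strands 75% of the
// hardware; packing four instances per core approaches full utilization.
#include <cstdio>

int main() {
    const double perInstanceLoad = 0.25;  // fraction of one core a game uses
    for (int instances = 1; instances <= 4; ++instances)
        std::printf("%d instance(s) per core -> %.0f%% utilization\n",
                    instances, 100.0 * perInstanceLoad * instances);
    return 0;
}
```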

So our solution is to physically separate compute from rendering -- Iwasaki-san’s architecture is able to run multiple user processes on a single CPU, then send the drawing instructions across a high-speed interconnect to a rendering server, which draws the frame and performs the final video encode. We’re able to do this very, very quickly.
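
Here is a heavily simplified sketch of what shipping draw instructions to a remote render server could look like. The command names and packet format are invented for illustration; the real system intercepts actual DirectX command streams, which is far more involved.

```cpp
// Invented wire format: the game process records an abstract command list
// instead of talking to a GPU, and the render server replays it.
#include <cstdint>
#include <vector>

enum class Cmd : uint8_t { SetTexture, DrawIndexed, Present };

struct DrawPacket {
    Cmd      cmd;
    uint32_t arg0;   // e.g. texture id or index count
    uint32_t arg1;
};

// Game process (CPU server): record commands instead of executing them.
void recordFrame(std::vector<DrawPacket>& out) {
    out.push_back({Cmd::SetTexture,  /*textureId=*/7,   0});
    out.push_back({Cmd::DrawIndexed, /*indexCount=*/36, 0});
    out.push_back({Cmd::Present,     0,                 0});
}

// Render server (GPU side): replay against a real device, then hand the
// finished frame to the video encoder.
void replayFrame(const std::vector<DrawPacket>& packets) {
    for (const auto& p : packets) {
        switch (p.cmd) {
            case Cmd::SetTexture:  /* bind texture p.arg0 */            break;
            case Cmd::DrawIndexed: /* issue a draw of p.arg0 indices */ break;
            case Cmd::Present:     /* finish frame, start the encode */ break;
        }
    }
}
```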

As we’re doing this, the architecture is constantly sharing resources between game instances -- calculations, texture cache, etc. This results in huge performance gains. Basically, the more users per CPU and GPU server, the more we share calculations, and the faster the frame time. The best part is that it works on existing DirectX 9/DirectX 11 code without requiring a rebuild of the game.
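
A minimal sketch of the cross-instance sharing idea, assuming a content-hash-keyed cache (all names invented): if ten instances of the same game request the same texture, it is uploaded once and reused.

```cpp
// Illustrative only -- the real system reportedly shares far more than textures.
#include <cstdint>
#include <memory>
#include <unordered_map>

struct GpuTexture { /* handle into GPU memory */ };

class SharedTextureCache {
    std::unordered_map<uint64_t, std::shared_ptr<GpuTexture>> byHash_;
public:
    std::shared_ptr<GpuTexture> acquire(uint64_t contentHash) {
        auto it = byHash_.find(contentHash);
        if (it != byHash_.end())
            return it->second;                      // another instance already loaded it
        auto tex = std::make_shared<GpuTexture>();  // upload once...
        byHash_.emplace(contentHash, tex);          // ...then every instance reuses it
        return tex;
    }
};
```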

In the end, that translates into extremely efficient hardware usage, more instances running per virtual unit, and ultimately, us being able to drive down the price of streaming.

Can you tell me a bit about how the Shinra SDK works? 

We are still early in regards to SDK development. Rather than creating a full kit and hoping people use the specific features we’re providing, we’re working one-on-one with developers to figure out exactly what kind of functionality they’re going to want for the games they’re making for the system. If someone says, “Hey, I need something that’s going to run fluid dynamics at this level of resolution with X number of clients,” we have the team in Montreal get to work. The effort that goes into establishing those tools and pipelines will roll back into a full, supported SDK that arrives much later. But yes, we’re laying the groundwork for it now with our partners.

But what about indies and hobbyists -- how are they going to make anything? I came into this job as an indie guy, and the first demo I saw was a ton of mammoth Supermicro servers networked together that ran about $10k each. I was like, “Oh geez, regular people can’t afford to develop on these things; we’d have to do a cloud-based timeshare system and it’d be like the 1970s all over again.”

Luckily, there’s this guy named Kengo Nakajima who lives up in Toyama -- he’s our evangelist in Japan and is a well-known MMO programmer there. He created a game called Space Sweeper for us, which is a 2D, massively multiplayer twin-stick shooter/crafting hybrid with thousands of moving objects at any given time, and enormous map sizes. It scales up to hundreds of players, and at peak the game is bananas -- imagine a bullet-hell shooter with an insane player count and countless gameplay objects flying around.

The important part here is that Nakajima-san did the entire thing from his laptop in Toyama. He’s got a system in place that allows you to make a cloud game, deploy it locally for playtesting, and then push it up to the data center for full-scale testing.

The architecture that his system uses is a bit different from the 1:N model -- some networking savvy is required, but that’s because it’s more akin to old-school PC LAN game programming, where all players are trusted “virtual” clients with guaranteed super-fast connections between each other, since they’re all on the same server rack.
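
As a rough sketch of that trusted-LAN model (all names invented): peers on the same rack can simply broadcast raw game messages to one another, with none of the validation, retry, or anti-cheat machinery an open-Internet client would need.

```cpp
#include <functional>
#include <string>
#include <vector>

struct Message { int fromPlayer; std::string payload; };

// Stand-in for the in-rack interconnect: delivery is assumed reliable,
// ordered, and near-instant -- exactly the assumptions old LAN games made.
class RackBus {
    std::vector<std::function<void(const Message&)>> subscribers_;
public:
    void subscribe(std::function<void(const Message&)> fn) {
        subscribers_.push_back(std::move(fn));
    }
    void broadcast(const Message& m) {
        for (auto& fn : subscribers_) fn(m);   // just hand it to every trusted peer
    }
};
```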

That system is going to become the baseline for the Community Cloud Development Kit (CCDK), which we’ll release later in the year. It’s designed to let people dip their toes in and test the waters of making a cloud game -- even if you don’t have a $10,000 Supermicro and a superfast connection.

What are the general terms of getting a game on Shinra? What can an independent developer expect to give up, and what can they expect to gain in return?

It all depends on the kind of game you’re making. We’re currently working with a small number of developers; these are teams who align with our vision and are going all-in on supercomputer cloud multiplayer titles. In these cases, they’re getting massive engineering support and collaborative development; it’s obviously in our best interest to make sure these games fully show off what the tech is capable of and ship with as much fanfare as possible.

When you make a game specifically tailored for the Shinra platform, you’re taking on a brand-new architecture that provides a very new experience from both the developer and player perspectives. Now, you’re giving up the ability to port that experience to other devices, but a cloud game is platform-agnostic on the client side -- you can ship on any platform the streaming client runs on (think about how Netflix runs on everything) -- so in a future where remote rendering is the norm, it’s a moot point.

On the other hand, if you’re interested in taking a current DirectX-based title and deploying it 1:1 on the platform, that’s a very easy conversation to have -- we’re happy to help put great games on the storefront; we have systems in place for getting catalog content up and running quickly.

In the end, it’s all about the gaming experience, but that said, we feel the move to the data center is inevitable. First came gamestate and positional data (multiplayer games), then saves (cloud storage), then game logic and physics offload (F2P browser/mobile games) -- rendering is the final step.
