By giving Xbox One a much-needed early visual powerhouse – despite all the doomsaying about the hardware’s horsepower – Ryse was arguably the console’s most important launch title. It disrupted the narrative, leaving many players reconsidering what the platform could really achieve. It’s certainly hard to imagine a day-one release offering more in terms of presentation and ensemble performance capture. As technical art director Christopher Evans and cinematic director Peter Gornstein explain, achieving such results involved risks, experimentation and lots of research.
How long ago did you suspect that the new generation would start at sub-1080p?
Christopher Evans: When we set out our pillars of the game very early, we didn’t even think about what resolutions we were going to do. We really wanted to focus on the characters, the emotion. Those were things we’d never really conquered before. Resolution is more a gamble of numbers – a sliding thing for us. For the Crysis franchise, we’d built a really intelligent upscaler, and most people didn’t even know it wasn’t running at a specific resolution. We just felt that when you play the game and see the images that are generated, there’s going to be another discussion happening.
Are those pillars different when dealing with a console launch title?
CE: The tech core pillars are always built off of the game’s core pillars. That meant we had to make a decision about the characters. For us, the next gen was going to be about having amazing characters in the game as well as in the cutscenes. That ‘play the cutscene’ idea really made us have to rethink. How are we going to do facial setup? How are we going to set up levels of detail? It was difficult enough that it ran the entire course of the project, and there were many times I was told that what we wanted to do for faces and characters was, tech-wise, one of the riskiest things on the project.
One of the things that I found myself defending a lot was the idea that, yeah, we’re going to take tons of scan reference of the actors themselves and replicate their exact performance, but the characters are going to be sculpted by human beings [in] an artistic process. The world, the armour, the face: everything is consistent. It’s not slapping a bunch of photo textures everywhere. When we did our facial scanning, we actually drew lines on the faces so I could check the skin sliding and stuff, so we didn’t even have the actual scan data with diffuse textures for the project.
We did a lot of reference. We had a photo reference pipeline, we went to Italy, and we were able to pull meshes with normal maps and everything off of our reference photography. But in the end, it just looks really eerie. Sometimes, if you do 3D scanning of a face, you get a face that looks like a moving video, but then the world doesn’t look like that. And you’re not able to populate the world with all of that stuff.
Do you think the shock of inconsistent texture resolution should be a thing of the past now?
CE: There’s a couple of things that play into that. Number one is the fact that it’s an artistic problem. We call that texel consistency. We have a way that we build the game where it shows everything as checkerboards, and a checkerboard has so many checks per metre, and if there’s a stretched texture that makes it look lo-res, [then] it looks lo-res to us.
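That texel-consistency check can be sketched as a density calculation: for each triangle, compare how many texture pixels map to a metre of world space against a target. This is a minimal illustration of the idea, not Crytek’s actual tooling; all function names and the tolerance value are hypothetical:

```python
import math

def triangle_area(a, b, c):
    """Area of a triangle from three 2D (UV) or 3D (world) points."""
    if len(a) == 2:  # 2D: shoelace formula
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
    # 3D: half the magnitude of the cross product of two edges
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return math.sqrt(cx * cx + cy * cy + cz * cz) / 2.0

def texels_per_metre(world_tri, uv_tri, texture_size):
    """Texel density of one triangle: texture pixels per metre of world space."""
    world_area = triangle_area(*world_tri)   # square metres
    uv_area = triangle_area(*uv_tri)         # fraction of the 0..1 UV square
    return texture_size * math.sqrt(uv_area / world_area)

def is_consistent(density, target, tolerance=0.5):
    """Flag triangles whose density strays too far from the target (stretched or wasted)."""
    return abs(density - target) / target <= tolerance
```

A 1m quad mapped across a full 1024-pixel texture comes out at 1024 texels per metre; a stretched texture on the same geometry reports a much lower density and fails the check, which is exactly what the in-editor checkerboard makes visible to an artist.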
A lot of people are looking at the hardware on PS4 and Xbox One, and wherever I talk about that, I try to stress that hardware is hardware, and hardware right now outstrips teams’ abilities to fill that RAM with assets. We have a team that’s been building high-fidelity assets for a long time. I think that in the future the team makeup and the pipeline and process that the teams use are going to matter much more, because you’re going to hit this problem where there’s so much RAM. You want a pipeline that allows an artist to ZBrush that trashcan in the corner so that it’s consistent with the awesome character and the awesome room and everything.
A lot of it is about building outsourcing pipelines: you build a prototype in-house and then scale that process up. We built Marius’s face in four months, and then we had to build 25 more in four months. And that’s going to be the nut to crack.
Peter Gornstein: It’s almost like an aircraft assembly plant, right? You’ve got to find the vendors all around the world that are expert at making that part, and the real trick is making sure it fits in when everything gets assembled.
You’ve notably used prerendered cutscenes in Ryse. Why do that when the engine is so capable?
CE: This is a funny thing for me, because my entire rigging pipeline is predicated on the idea that I have to build rigs that can blend in and out of cutscenes seamlessly. So I would go to Peter and the guys and say, “Hey, I see this loading video is now scheduled to be prerendered. Why?” We talked about it and it was, “Well, we don’t want players to be waiting. If we’re rendering a scene live as well as trying to load the next scene, the engine will take probably three or four times longer.” In the end, we sided with the gamers. We didn’t feel they should have to wait through a big loading time.