James Knight, the virtual production director at AMD’s Radeon Technologies Group, sits on the cusp of the industry with a very good view of the future. His unique perspective on all things virtual reality, combined with years of experience in film production (Avatar, The Amazing Spider-Man, The Incredible Hulk), has allowed him to get directly involved with Hollywood content creators, as well as video game developers. According to Knight, this new immersive era is not a bluff. It’s coming, and it’s going to be big. All of the major studios are seriously behind VR now. If you want to create a VR experience, you need a GPU (graphics processing unit) to produce it, and the consumer needs a GPU to consume it. So AMD, a company that develops graphics, processors, and immersive VR solutions, finds itself poised for progress in an incredible time of growth for one of today’s most intriguing emerging technologies.
We spoke with Knight and John Swinimer, the PR manager for the Professional Graphics Group at AMD, as they prepared for this year’s SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) conference in Anaheim.
S&P: It’s a pleasure to speak with both of you. How are things?
James Knight (JK): It’s been hectic preparing for this SIGGRAPH event. They’re announcing a lot of big stuff. It’s their best audience: visual effects professionals, content creators, animators, people using game engines, decision makers, influencers, and artists. They come there to discover new tech and to see how people are using tech. It’s a great thing for us.
John Swinimer (JS): The group we work with produces graphics technology to help content creators and visual effects teams create amazing visuals. Earlier this year, we introduced a new product to the family lineup called the AMD FirePro W9100 [professional graphics card] with 32 GB of memory on board, which you might not think is so groundbreaking. But how much imagination can be realized is contingent on having that extra memory, on having that limitless toolset. James can speak to the demand behind the scenes to keep up with these imaginations, especially regarding VR.
JK: What you’re seeing from a bird’s-eye view — if you have a film, you don’t need a GPU to consume it. If you’re doing a visual effects film, you do need a GPU to create it. You need to be able to composite graphics and work with game engines and NUKE, Maya, and all that. But if you’re simply showing a movie, you don’t need that. With the dawn of VR, you are seeing the need to create and manipulate huge data sets at the beginning and at the end. If you’re using a game engine and doing something that’s true VR and that has to be rendered in real time, you also need a GPU to consume it.
So it’s a great time for us. As a company, we’re obviously keen to get more involved in that. Now we’re with the content from the start all the way to the end, even with the consumption. And part of the reason for the 16 GB version along with the 32 GB version is because it gives users the options for high-end creation and/or high-end consumption. There are some high-end VR experiences coming that will require users to have memory right on the GPU — memory with quick access that bypasses the PCI bus and that will just be right there on the GPU.
That’s part of why we’re seeing the uptick: this demand for higher memory directly on the GPU. Then you’ve got real-time visualization on high-end feature films that are using real-time computer graphics with textures, lighting, animation — they’re streaming it all to the viewfinder to eliminate guesswork on the back end, for post. It used to be, “Oh, I guess we can put the alien here,” if that makes sense. Now they know on the day where things are in the shot. They have a rough idea, because they are able to stream something approaching final into the viewfinder now, because they have this GPU with a fairly high amount of memory right on it.
S&P: How does all of this intersect with crew workflow?
JK: It is constantly emerging. We as a tech and chip company can make products, and we have an idea of what we think they’ll be used for. But what’s great about the community is that they push things and manipulate things in ways that we haven’t necessarily thought of. So that’s part of the impetus for us to have an office now in L.A. We have eight of us here in L.A., and we’ll be expanding that. We’re listening to the creators, hearing directly from them what they want, all the pie-in-the-sky things they desire. And we’re working with content creators to align ourselves with great and ambitious projects, and then new technology comes out of that.
S&P: So you and your department are kind of acting as a bridge between the engineers and the creators?
JK: Yes, that’s an accurate and succinct way of putting it. We’re here in the heart of Hollywood, and what we’re doing is working with — well, there’s a lot we can’t say for the moment. We do have some big upcoming reveals. But we’re aligning ourselves with certain pieces of high-end content, or certain studios, for various reasons, because they have bespoke needs. Like they want to be able to do something, so we develop a piece of technology with a certain studio to do that special something. Then after that, that new piece of technology is now available to the community, and it makes people’s lives a little easier. So, we’re doing that.
Also, when you are looking to make a decision on purchasing the guts of your computer, it can get interesting. As consumers, we make purchasing decisions for rational reasons, and we make these decisions for emotional reasons, too. So at AMD, we’ve attached ourselves to certain pieces of content that will resonate with people. That’s why I’m here working with Fox right now, and we are attached to Assassin’s Creed, for example. But why would a GPU company want to attach themselves to that? Well, we’ll bundle it. That’s the idea. We’re working with game creators and film creators.
The biggest reason for opening an office here is virtual reality and 360° video. People need GPUs more, so that’s the reason, the conduit. But the side effect of that is that we want to bleed more into high-end content creation that isn’t just virtual reality. We’re working with visual effects studios to help them visualize their projects. We’re trying to work and help out in all aspects of it, from that perspective.
The truth is, if you’re going to edit something, if you’re going to do any visual effects, you have to talk to either Nvidia or us. You need a graphics card. At AMD, we’re all about innovation. My father once told me that the world belongs to the discontented. I think it’s fine to be happy, but when you get content, then you get lazy. We’re certainly not that. We’ve completed one thing, and now we’re immediately on to the next. We’re pretty forward-thinking, and we think it’s a very forward-thinking move to have a presence here in Los Angeles, the epicenter of content creation — and especially, as we’ve found, the epicenter of virtual reality content creation. We’re doing a good job out here, and proving to people that we’re worth a damn, because we are.
JS: You’re probably familiar with AMD’s graphics on the consumer side, and certainly we’re in the game consoles. There seem to be a lot of sharable assets when it comes to creating games, and that digital content can be repurposed for visual effects and that sort of thing. We’re seeing a transition now, not only in the creation but also in the reusing of assets.
JK: Exactly. Well, you create your assets once. You create your geometry, and it’s used for everything. So you create assets for previz, but you don’t throw them away. You don’t want to rebuild them. We reuse them throughout the entire production. We’re seeing that as well with game development, this reusing and sharing of assets. More and more often, game developers and film studios are sharing these assets with one another.
People are thinking more efficiently. Budgets are always getting tighter. We’re just affording people more efficiencies and trying to make people’s lives a lot easier. We’re focusing on the tech so they don’t have to worry about “How do I do this?” and all of that, so they can focus on their craft — acting, directing, producing. And we’ll take care of the rest. We want to be the Staples “Easy Button” of visual effects and VR content creation.
S&P: At Sound & Picture, we’re definitely interested in all things collaboration. What AMD collaborations can you talk with us about?
JK: Keep an eye on the AMD news that will be coming out over the next year. We’ll be collaborating with content creators and studios. Primarily, we want to sell our graphics cards, but we want to align ourselves with ambitious and rich IP, to be able to do that. And if high-end content creators that we collaborate with are comfortable working with us, and if we’ve made their lives easier and their vision possible, then surely the fanboys and consumers will feel comfortable buying our stuff to play their games or consume their VR content, because it’s already been tested and created on our hardware anyway.
We’re contributing a lot to the community, as well. We have GPUOpen [an AMD initiative designed to enable developers to create games, imagery, and GPU computing applications using no-cost and open development tools and software], and there are new plug-ins. These are bits of software and algorithms that we have slaved over to make people’s lives easier from a graphics perspective, and we’re giving this all away to the community. For example, FireRender. It’s open source. It works with our competitors’ products. It’s shackles off. You have a choice now. If you use an Nvidia card, you can still use FireRender. We’re not tying you to our GPUs.
JS: FireRender is a ray-tracing tool that helps a lot in making the scene look more realistic. VR is obviously still in its infancy. And another thing: with VR, sound is everything. Sound engineers and designers create these wonderful encapsulating environments that you can walk into and hear everything, and it’s all around you.
But the visual is the thing that will sometimes pull you away. If you don’t have high-quality rendered visuals in front of you, it looks like a cartoon. The goal at AMD with the graphics is to make it as realistic as possible. The VR audio aspect is a checkbox that’s been marked. We’ve got that part. It’s done. But getting the visuals done in a high resolution, that’s the next step.
JK: Yes, it is. Content is all about suspension of disbelief. And if the content you’re consuming has latency or hiccups or whatever other aberrations, then that will take you out of the experience. To maintain that crucial suspension of disbelief, it has to be fluid to be believable. We’re fully cognizant of that, and that’s another reason to partner with people, to make sure their content runs smoothly, so that it won’t distract the user from the experience.
S&P: James, give us a brief overview of your extensive background.
JK: Currently I’m the virtual production director at AMD. I’ve been working in visual effects for about ten years, as well as virtual production and on-set visualization. Last year, AMD collaborated with me and a few others on a project where we recreated the Wright Brothers’ first flight as a VR experience, using visual effects and performance capture and then putting it all in a game engine. Again, that experience has to be rendered in real time to consume. We all got to know each other working on this together. Like I said, AMD had been wanting to expand into Los Angeles and more into content creation, so that’s how that happened. I’ve always been a technologist. I’ve always been fascinated by it. It’s been really great to work with AMD on all these visual effects projects.
I was the project manager for the performance capture on the film Avatar for three-and-a-half years. I also worked on The Amazing Spider-Man, some other films, and some video games. I’ve enjoyed that, but production is fairly grueling. It’s fourteen-hour days, sometimes seven-day weeks. I have a little boy now, and I’d love to see him grow up. Now working with AMD, I don’t actually have to work fourteen-hour days and seven-day weeks. So it’s a brilliant thing for me. I still get to work with the people I know and love, and touch certain projects. And I’m getting to help AMD shape their vision and work with the studios a lot more efficiently. It’s a fascinating time.
S&P: What are some of the latest and greatest breakthroughs that you can tell us about?
JK: I know what’s coming, and surely there’s plenty I can’t talk about at the moment. But I really think we’re seeing an uptick. It’s not necessarily one event, but if you were to do a chart on where VR is in its life cycle — it’s just about to rocket. We have noticed that up to this point, it was technologists, enthusiasts, and college students who brilliantly brought VR to the point it is now. And others used it at various points throughout the past few decades. But never to this level. Now you have these new innovators who have taken it to this breakthrough point where it’s an actual medium. What we’re seeing currently is that real storytellers and real content creators and real crew members and actors are all getting involved in the medium. So we’re seeing a groundswell that we’ve never seen. This is a much, much larger groundswell than we ever saw for stereo 3D.
Now it’s like the next chapter in VR. The professionals are standing on the shoulders of the technologists and the enthusiasts and college students who brought us this far. We’re not decrying this. It’s organic. Like with any piece of great technology, you need that phase. And then you’ll see more consumers take it on, because you’re going to have more inspiration to buy the tech. If you see a VR demo that doesn’t do much for you, then you won’t go out and buy a new graphics card to run VR, or you’re not going to go buy an HMD [Head Mounted Display]. But if you hear that so-and-so director is involved with something or some big movie that you love, and you’d be fascinated to see a piece of VR narrative related to that movie, then you might be more motivated to buy that card and to buy that HMD.
Also, education and medicine will do a lot with it. We’re seeing a lot of that, and there are plenty of upcoming announcements about that. That’s an event in and of itself: people wanting to use VR for education. It’s one thing for a student to read about a piece of history and to know about it, but if you experience it because you’re put there by virtual reality, it goes to a different area of your brain. You really, really remember it. There’s a big difference. It’s not like learning something and then forgetting it two months later.
We saw this with the Wright Brothers’ experience that we produced. If you witnessed that experience, you wouldn’t have to think about it: you’d automatically remember it from a deep place. You could probably tell us how many people were there, how far they flew, what the weather conditions were like, and so on. So I do think that recreating history and other educational aspects will be a big part of VR.
As John said, sound is a big deal. It’s half the experience. It’s sound and visual. Without sound, it’s just flat. It’s not emotional. There’s some great technology coming out in sound for VR, like directional audio, for example.
S&P: What are the special challenges of recording sound for VR?
JK: Well, there’s a VR headset for a full VR experience. Then there’s 360° video, camera-caught, which isn’t technically virtual reality — it’s reality capture. Recording audio for both experiences will be different. As far as the technical details, at AMD, we’re not a microphone business, so I can’t answer many of those questions directly. But we’ve been creatively working with game companies and virtual reality companies that are popping up all over the place, working with them on audio and getting them to process the audio on the GPU. And maybe there are certain algorithms that we’re working on that deal with directional audio inside game engines that we’re supporting. There’s a lot I can’t say about this stuff. But having the audio on the GPU is something that’s very big, and that we’re very much considering and working on.
There’s significant innovation in this space. Practically every major studio is taking the VR medium very seriously, and it would be silly to discount audio’s importance in virtual reality, because it’s immersive. Without the audio, then you put on your HMD and you can still hear everyone in your office or your kids running around in the background. You need good audio and headphones to fully immerse yourself into VR. Again, it’s all about suspension of disbelief. As people supporting content creators, we want to be able to aid that suspension of disbelief, and as John said, help make those graphics perfect and run smoothly.
S&P: It’s interesting to hear that the audio will be processed on the GPU for VR experiences.
JK: Absolutely. You want the graphics and the audio living together in sync, so they can both be pulled in real time. When you turn your head and look at certain things in real life, your audio changes. We want that experience to be no different in virtual reality. So it’s best that all of it lives together on the GPU.
S&P: What are some of the biggest ways all of this new technology will affect TV and film crews, and game developers?
JK: Well, for one, from the aggregate view, the idea is to make people’s jobs easier — easier for you to see a representation of your final product. The tech we’re producing at AMD gives you the ability to — if you’re working on an animated film, for example — shoot it as if it were live action, with actors in performance capture suits and your CG environments. It’s more commonplace now, and we’re making it much easier for people to have what is called a Virtual Camera. Think of it like an iPad, something like a tablet that has video game controllers on the side. You’re on a performance capture stage, and you look through the Virtual Camera. You can move it around and shoot a CG or animated movie as if it were live action.
You’re able to shoot things in a more organic way. There’s something about having handheld shots. It draws the audience in. That’s the idea behind the Virtual Camera, to make it more organic. It’s not just an animator who’s sat at a computer with the director over his shoulder, deciding where to point the camera. It’s now someone on a set — on a CG film — holding the physical camera, which gives a much more organic feel. Go back and watch Avatar and just be cognizant of the camera moves, and you’ll see what I’m saying. It’s much more organic, like in the jungle scenes.
So that’s one way in which it affects filmmaking. And the ability to see things with more finality, or striving for photoreal, on set, and making that easier. Our role is to make content creation easier for the filmmakers and video game makers. We’re making post production shorter. We want to be involved in that, and with the blending of pre production, production, and post. With the dawn of video games and visual effects and game engines, those three pillars of the production process — all of a sudden those lines are blurred. They’re sort of one iterative process, and AMD is here helping that process and making it more efficient. We sincerely realize that the human brain has wonderful, unlimited imagination and ambition, but the budgets aren’t always going to be unlimited. So we’re about making things more efficient and more cost-effective. And having fun, too! Everybody likes fun.
The reactions we get when testing out some of the latest tech are great. I don’t know if you’ve had that experience of the first time you ever rode in a really fast car and it accelerated, how it makes you giggle. That’s the reaction that we got from this new tech, which we’ve yet to announce. They were just giggling. It’s so cool to see that. It’s very nice when you can bring this to the set and have people giggle at what we just created. Then they go off and think, “Oh, we could do that or this with this new thing.”
Learn more about AMD: http://www.amd.com/en-us
Photos courtesy of AMD