Above: I built this frowning Hiccup using the Premo tool.

Image Credit: Dean Takahashi

Wallen: They used to be in an analog world, where they drew on paper. That wasn’t so long ago. Then we dumped them into the digital world and made them act like CAD engineers. Now we’ve managed to put them back in a purely creative experience, an analog experience, where they can see a curve and draw it. They can achieve it in the same way. Their graphic skills, their visual skills, are now immediately reflected by the behavior of the digital medium. That all comes from sheer power.

VB: Does everyone use the electronic stylus now? Or does anybody still use paper?

Wallen: Our storyboard artists sometimes do, but generally they start drawing on a Cintiq, so it’s digitized and easily manipulated. Certainly it’s the case that some of the animators have become so used to a mouse and keyboard that it takes them a while to say, “I don’t have to do that anymore.” It’s been an interesting transition. But all animators have these setups now, with adjustable Cintiqs to get the right ergonomic position.

Baker: It’s been very freeing for them. A few people had to make the transition from working with a mouse and keyboard, where they know where everything is, and where they’ve been trained to translate their creative impulses into numeric entries. But once they’ve spent a couple of weeks learning how to use Premo, for example, they don’t want to go back. There were a couple of instances where animators had to go backward. They did that animated Christmas card, and a couple of animators had to go back into it. They said, “Oh my God!” They’re completely invested in this way of working. It feels so natural now that going backward felt like a real slog.


Wallen: One of the things that has allowed us to move so fast with such a radical change is working so closely with the animators. My engineering team works daily with the animators, refining the workflows as well as working on the underlying architecture. The whole software delivery process was run much like a movie, with directors at the helm, people like Fred and Jason and Rex. Their vision was realized on the screen once more, but this time from a workflow point of view.

VB: You talked about how the old platform had roots going back to the ‘80s. With the new one, are you trying to build something with lots of headroom? 10 years from now, can you still be using this in some form?

Wallen: This was one of the core motivators. As you can imagine, changing the software underneath a business that’s making maybe 10 movies at any given time, moving artists across the globe to bring those out at three a year, is not easy. You only want to make radical changes in a very careful manner.

The first radical change I mentioned occurred before this, moving the production platform into an infrastructure-as-a-service model. That was already in place. The artists across all the movies were using a common platform. That gave us a target, as well as an operating model. The cloud was a natural part of our environment. It wasn’t something new. It was something to be exploited. It also wasn’t something we were using to somehow shrink the data center. It was there as a tool.

Then Intel came along and said, “Yeah, well, it’s four cores at a time now, but we see 60 over here. Let’s start talking about Xeon Phi.” This is an interesting point from a silicon point of view. The question here is, what is the scalability of the computing model? The single-core IA architecture is a very scalable computing model. It’s gotten faster over the years. A multi-core, vectorized architecture starts to make you think about other forms of compute. The question is, how easy is it to move code on and off of that, between the CPU and anything else? How do you scale it across multiple CPUs? Because of the commonality of the IA chipset across any one of these types of platforms, whether it’s a Xeon Phi with 60-odd cores or a Xeon with eight cores today, the difference is irrelevant from a compute-model point of view. We’re able to compile and integrate. We can run Steven’s image on one core, or we can run it on a thousand.

That was critical. But we wanted to put an architecture in place and build an architecture that would naturally scale, regardless of how the platforms changed. We know we can scale the data centers. But we also need to scale at a micro-architecture level.
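To make that idea concrete, here is a minimal illustrative sketch, not DreamWorks or Premo code; the `evaluateElement` stand-in, the element count, and the thread counts are assumptions. It shows the shape of an N-way parallel compute model: the same per-frame evaluation code runs unchanged whether it is given one core or many.

```cpp
// Illustrative sketch only. One evaluation kernel, scaled across a
// configurable number of worker threads; workers == 1 degenerates to
// the old single-core model.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Stand-in for evaluating one element of a character rig or one image tile.
double evaluateElement(std::size_t i) {
    return std::sin(static_cast<double>(i)) * 0.5 + 0.5;
}

// Evaluate `count` elements using `workers` threads.
std::vector<double> evaluateFrame(std::size_t count, unsigned workers) {
    std::vector<double> out(count);
    std::vector<std::thread> pool;
    const std::size_t chunk = (count + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&out, count, chunk, w] {
            const std::size_t begin = w * chunk;
            const std::size_t end = std::min(count, begin + chunk);
            for (std::size_t i = begin; i < end; ++i) {
                out[i] = evaluateElement(i);
            }
        });
    }
    for (auto& t : pool) t.join();
    return out;
}

int main() {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const auto frame = evaluateFrame(1000000, workers);
    std::cout << "evaluated " << frame.size() << " elements on "
              << workers << " threads\n";
    return 0;
}
```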

Baker: We like to think of it as N-way parallel, so it’s extensible to the future.

Wallen: Yeah. We’re in no trouble with that. We’ve been through two generations now of putting machines in, running software, and all the goodness comes through to the artists. We use that as a way of adjusting the speed or expectations in the productions. When we put a new machine in, we know how much more effective that is through the software. It’s like turning up the clock.

Above: DreamWorks Animation’s How to Train Your Dragon 2

Image Credit: DreamWorks Animation

VB: There have been complaints in recent years that, as the demands of complex data kept accelerating, rendering times weren’t able to come down. Are you able to solve that now?

Wallen: Computing something in some abstract sense is just going to take the time it takes. You can improve that computation and take out all the inefficiencies in it, but in the end you’ve got some work to do. The question about time to compute on a single-core model is literally about the speed of that core, or how many of those inefficiencies you’ve been able to take out.

When you have a scalable compute model, it’s no longer about the individual efficiency of a given operation. It’s about your orchestration. Can you get the data to a larger and larger number of cores to decide just how fast you want that computation to take place? That’s what we have achieved. It’s scalability. We can adjust the time frame in which a given task completes.

The reason we can take the proxies, those reduced-complexity characters, out of animation is that it really doesn’t matter how complex a character gets. We can always scale to keep the frame rate where it needs to be for the animation process. We can be courageous in designing that ideal workflow into the software. We’re not continuously hedging against whether the filmmakers will come up with an idea, like a massive dragon, that will blow the compute budget. That’s a memory question and a compute question.
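As a back-of-the-envelope illustration of that orchestration arithmetic (the numbers here are assumptions, not studio figures): if a frame needs a fixed amount of core-seconds of work, then holding an interactive frame rate as complexity grows is a matter of provisioning enough cores rather than simplifying the character.

```cpp
// Illustrative sketch with assumed numbers: pick the core count that hits
// the frame time you want, instead of accepting whatever a single core gives.
#include <cmath>
#include <iostream>

int main() {
    const double workPerFrame  = 2.0;    // assumed core-seconds of character evaluation per frame
    const double targetSeconds = 0.042;  // ~24 fps interactive playback target
    const double efficiency    = 0.8;    // assumed parallel/orchestration efficiency

    const int cores = static_cast<int>(
        std::ceil(workPerFrame / (targetSeconds * efficiency)));
    std::cout << "cores needed for interactive playback: " << cores << "\n";  // ~60 with these assumptions
    return 0;
}
```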

That’s the feeling the animators get. I know Dean felt, as he got into the experience of what the production artists were able to give back to him, and how quickly, that we can aspire. We really don’t have to think about, “Is it doable?” It’s about what’s best. If there’s one message to take away, it’s for businesses to look at their workflows and say, “I can decide what’s best? That means I can take massive steps in terms of quality of production, agility, and the bottom line.” It’s a big deal.