Mark Cerny, the architect of Sony’s upcoming PlayStation 4 video game system, said that PC architecture had finally grown up enough to become the foundation of a sophisticated home console.

In a speech at the Gamelab conference in Barcelona, Spain, Cerny said that the Japanese company had learned that the PlayStation 3 hardware based on the Cell microprocessor was just too complex for game designers. They eventually mastered the technology, but in the early days of the PS3, not enough good games exploited the processor. The slow acceptance of the PS3 led to the eventual departure of Ken Kutaragi, the father of the PlayStation business. Cerny stepped into his shoes as the architect of the PS4.

The PlayStation 3 launched in 2006, and Sony conducted a postmortem in 2007. That process was more collaborative than the initial design of the Cell, which was largely done in secret.

“The obvious path [for the PS4] was to use Cell,” Cerny said. Once developers mastered Cell’s many subprocessors, or cores, they could work magic. But the team decided to look at options with central processing units (CPUs) and graphics processing units (GPUs).

The conventional wisdom was that x86 (the PC architecture based on Intel’s chips) was “unusable in a game console,” Cerny said. “PowerPC was a straightforward architecture.” During the Thanksgiving holiday in 2007, Cerny researched the whole history of x86. He concluded that x86 processors from Advanced Micro Devices and Intel could finally be used in a game console. He didn’t spell it out, but Cerny probably meant that the prospect for combination chips had finally arrived. In the traditional PC, the CPU and the GPU were separate chips. But AMD had purchased graphics chip maker ATI Technologies in 2006 for $5.4 billion, and it was in the process of designing chips that combined the CPU and the GPU on a single piece of silicon. That cut the cost of the chips, the most expensive components in a game console, in half. But this kind of design usually sacrificed performance.

Above: PlayStation 4 controller (Image Credit: Sony)

AMD, however, was working on creating CPU/GPU combinations that could take advantage of being on the same chip. Spurred by the competition, Intel did the same thing. By 2011, both companies were able to introduce combination chips for the PC. By believing in these combos, Cerny had made a good guess about why x86 would work in a game console. In doing this research, Cerny decided he wanted to have a bigger role in the PS4’s design. He successfully made a pitch to become the architect of the machine. In early 2008, the design of the PS4 began in earnest.

Cerny focused on a more collaborative approach. “We started frank conversations with the game team,” he said. He created a questionnaire for the developers outside of the company. It asked them what they wanted to see in a next-generation console. The questions asked what type of CPU, GPU, and other details they wanted. The goal was to create something that would be 1,000 times more powerful than a PS3.

“They were not fooled for a minute by the abstract nature of the questionnaire,” Cerny said, and the developers correctly saw that Sony was seeking their opinions for the PS4. Cerny talked to 30 teams, and he received enlightening answers. They were not what he expected.

“They wanted a system with a unified memory,” he stated. This meant that developers wanted the PS4 to have a single pool of memory rather than two. PCs and earlier game consoles used separate memory to feed data to the CPU and to the graphics hardware, but a unified pool would be easier to program. Developers said the right number of CPU cores would be four or eight. Sony eventually chose eight.

Drawing a lesson from the PS3, “they didn’t want exotic,” Cerny said. “If there was a GPU that could do real-time ray-tracing [a sophisticated technique used in ultra-realistic graphics], they didn’t want it for the PS4.” Ray-tracing would have been fascinating, Cerny said, but it would have forced game developers to throw out all they had learned from the last generation of graphics.

Cerny liked those answers. He wanted an architecture that would be easy for developers to use early in the console’s life cycle but sophisticated enough for them to further exploit later on.