Mark Cerny, architect of Sony’s upcoming PlayStation 4 gaming console, made some interesting and frank observations about the mistakes that Sony made with the launch of the PlayStation 3.
In a speech at the Gamelab conference in Barcelona, Cerny acknowledged Sony’s mistakes with the PS3, and he said that these experiences explain why Sony is taking a more collaborative and simpler technology approach with the design of the PlayStation 4. Here’s part one of our coverage of Cerny’s talk.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":772149,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"D"}']Sony’s PS3 foibles are well-known in the video game industry, and they explain why the company fell behind both Microsoft and Nintendo during the last generation, after dominating the preceding era with the PlayStation 2. Sony representatives have rarely discussed the criticism and details behind those mistakes. While Cerny led the design of the PS4, which comes out this fall, he is a consultant at Cerny Games and isn’t a full-time Sony lifer. That might explain why he was more frank in describing the PS3’s problems and how they contributed to an improved design for the PS4.
Cerny can talk about these issues because they happened a while ago and, for the most part, they weren’t his fault. Consequently, people can lay the responsibility for both the early success of the PlayStation business and the weaknesses of the PS3 squarely at the feet of Ken Kutaragi, the father of the PlayStation at Sony.
The PS3 project started auspiciously enough in 2001 when, at the peak of Sony’s success with the PlayStation 2, Kutaragi announced that Sony, Toshiba, and IBM would collaborate on the Cell microprocessor that would become the heart of the PlayStation 3. Hundreds of engineers designed the chip over several years, and it represented a radical departure from the typical single-processor, single-graphics-chip blueprint. The Cell paired a general-purpose core with eight coprocessors dubbed Synergistic Processing Elements (SPEs). It was powerful but complex.
Shuhei Yoshida, then head of Sony’s game studios in the U.S., received approval to embed a team of game programmers — including Cerny — inside the PS3 hardware team to explore game creation. Cerny became a member of a team dubbed ICE, which stood for the Initiative for a Common Engine, whose job was to envision the titles of the next generation. Yoshida’s idea was to get games in development as much as a year earlier in order to be ready for the launch. It was a good thought, but, in reality, it wasn’t early enough.
In the summer of 2003, Cerny went to Japan to study the Cell. He had expected “something from a James Bond movie” but found that a small number of people were driving the project. The Cell design was already done.
Cerny looked at the documentation behind Kutaragi’s design. He saw that the chip was powerful, but only if you could truly master the SPEs.
“The [SPEs] had huge potential, but huge effort was required to program them,” he said.
Programming the Cell meant taking an operation, breaking it down into subroutines, and then dispatching each one to a subprocessor. Learning to do that was like solving a very complex puzzle. Cerny admired the technology, but he didn’t yet realize that it would lead to a console that was too expensive.
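To give a flavor of what that decomposition looked like, here is a rough, hypothetical sketch in C. It uses ordinary threads as stand-ins for SPEs; real Cell code went through Sony’s SDK and had to shuttle data into each SPE’s small local memory via DMA, which was a large part of the difficulty. The names and the trivial doubling “work” are invented for illustration, but the break-the-operation-into-jobs pattern is the one Cerny describes.

```c
/*
 * Illustrative sketch only: the real Cell SDK used libspe and DMA
 * transfers into each SPE's 256KB local store. Here, plain pthreads
 * stand in for the SPEs to show the "break work into jobs, dispatch
 * to subprocessors" pattern. All names are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>

#define NUM_SPES 8   /* the Cell had eight SPEs (not all available to games) */
#define N        1024

typedef struct {
    const float *in;
    float       *out;
    int          start, count;
} Job;

/* The "subroutine" each subprocessor runs on its slice of the data. */
static void *spe_kernel(void *arg)
{
    Job *job = (Job *)arg;
    for (int i = job->start; i < job->start + job->count; i++)
        job->out[i] = job->in[i] * 2.0f;   /* stand-in for real work */
    return NULL;
}

int main(void)
{
    static float in[N], out[N];
    pthread_t threads[NUM_SPES];
    Job jobs[NUM_SPES];

    for (int i = 0; i < N; i++)
        in[i] = (float)i;

    /* Break the operation into per-SPE jobs and dispatch each one. */
    int chunk = N / NUM_SPES;
    for (int s = 0; s < NUM_SPES; s++) {
        jobs[s] = (Job){ in, out, s * chunk, chunk };
        pthread_create(&threads[s], NULL, spe_kernel, &jobs[s]);
    }
    for (int s = 0; s < NUM_SPES; s++)
        pthread_join(threads[s], NULL);

    printf("out[100] = %f\n", out[100]);   /* prints 200.0 */
    return 0;
}
```

The puzzle Cerny mentions came from doing this for every performance-critical operation in a game while also keeping each job’s code and data small enough to fit in an SPE’s local store.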
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":772149,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"D"}']
“I stayed focused on how to best use the chip that had already been designed,” Cerny said.
Cerny said it was exciting to work on the new hardware but scary because it was hard to figure out how to make even the most basic tasks work. For Sony’s first-party team of internal game developers, the early insight was a huge advantage. Thinking only of their own interests, the Sony dev teams relished the “tremendous lead over third parties,” who would not learn how to program the machine until much later. They didn’t understand at the time that this imbalance would become the console’s main weakness.
“We were thinking about our own game titles for SCEA [Sony Computer Entertainment America] in the U.S., not the platform at all,” he said.
By early 2005, the focus shifted to creating launch titles for the PS3, which was slated for a holiday 2006 launch. But game makers found very little support. Sony’s engineers had not yet created a quality debugger for the SPEs. A low-level graphics driver (code that helps titles talk to the hardware) did not exist, and neither did a graphics chip debugger or performance tools. The first-party game developers were having a hard time, and the third-party teams were even worse off. But Sony eventually realized that third parties were essential to the success of its system.
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":772149,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"D"}']
Cerny figured out that it took six months for teams to create an engine that would enable the prototypes that were a necessary part of finishing games. That compared with three to six months for the PS2 and one to two months for the original PlayStation. The new technology delivered gorgeous final releases, but the complexity had gone up by an order of magnitude.
The result, Cerny admitted, was a “weak launch lineup.”
He said, “Anyone who lived through those times understands the need for international communication, the value of frank and open conversations, software tools, and the role of third parties.”
Cerny didn’t disclose everything that went wrong with the PlayStation 3. One of the biggest crises came as the team tried to figure out how to program the Sony-designed graphics chip. The complicated hardware didn’t take into account a revolution that had happened in PC gaming, where graphics chip maker Nvidia had pioneered a new technique dubbed “programmable shading.” With it, developers could run a graphics program on every single pixel of a game scene, allowing for much greater complexity in 3D images.
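For readers unfamiliar with the term, here is a conceptual sketch in C of what programmable shading means. Real PS3 shaders were written in a GPU language such as Cg and ran on the graphics chip itself; this hypothetical CPU loop only illustrates the core idea of running a small, developer-supplied program once for every pixel.

```c
/*
 * Conceptual sketch of programmable shading: instead of a fixed lighting
 * formula baked into the hardware, developers supply a small program that
 * the GPU runs for every pixel. This CPU loop is illustrative only; all
 * names and the pattern computed are invented for the example.
 */
#include <stdio.h>

#define WIDTH  4
#define HEIGHT 4

typedef struct { float r, g, b; } Color;

/* A hypothetical per-pixel "shader": a gradient modulated by a procedural
 * checker pattern, the kind of per-pixel logic that fixed-function
 * hardware couldn't express. */
static Color pixel_shader(float u, float v)
{
    float checker = ((int)(u * 8) + (int)(v * 8)) % 2 ? 0.25f : 1.0f;
    return (Color){ u * checker, v * checker, 0.5f * checker };
}

int main(void)
{
    /* The rasterizer invokes the shader program once per pixel. */
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            Color c = pixel_shader((float)x / WIDTH, (float)y / HEIGHT);
            printf("(%.2f %.2f %.2f) ", c.r, c.g, c.b);
        }
        printf("\n");
    }
    return 0;
}
```

Because the per-pixel program is just code, developers could swap in new lighting and material effects without waiting for new hardware, which is why the technique reshaped PC graphics so quickly.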
[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":772149,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"D"}']
Sony scrapped its in-house graphics chip and, at the last minute, signed a deal with Nvidia to supply the custom RSX graphics chip for the PS3. Cerny glossed over the big change in plans, but he acknowledged that the team had to “scrap” a lot of work. This, along with the decision to include a Blu-ray media player in the PS3, led to a considerable delay in the launch of the console. Overall, the cost of the Cell and the accompanying technology forced Sony to price the initial machine at $599. It launched in 2006, a full year after Microsoft’s Xbox 360 debuted.
At first, Sony’s game lineup was weak. Microsoft closed the gap in both technology and game quality, but Nintendo surprised both with the launch of the motion-sensing Wii game console. In 2010, Microsoft made a comeback with the launch of its Kinect motion sensor, and Sony lagged behind. It went from complete dominance with the PS2 to third place with the PS3.
Did Sony learn from its PS3 failures? We’ll find out this fall.
Here’s Cerny’s full talk.
[aditude-amp id="medium4" targeting='{"env":"staging","page_type":"article","post_id":772149,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"D"}']