Advanced Micro Devices is taking aim at Nvidia with its new Radeon Instinct line, which repurposes the company’s graphics chips as machine intelligence accelerators.

Sunnyvale, Calif.-based AMD is following its rival into graphics processing unit (GPU) accelerators for machine intelligence through a combination of hardware and open source software. The first of the new AI chips is based on the Polaris graphics architecture that AMD introduced earlier this year, with additional parts built on the company’s Fiji and upcoming Vega architectures.

“AMD had to get its graphics house in order, and now it’s going after AI,” said Kevin Krewell, analyst at Tirias Research.

Above: Raja Koduri, head of Radeon Technologies Group at AMD.

Image Credit: Dean Takahashi

The aim is to dramatically increase performance, efficiency, and ease of implementation of deep learning neural network workloads. New Radeon Instinct accelerators will offer organizations powerful GPU-based solutions for deep learning inference and training.


“We view graphics as very strategic over the next five to 10 years,” said Lisa Su, CEO of AMD, in an interview with VentureBeat. “When we started with the roadmap, it was, ‘Consumer graphics is very important. It’s a core user base for us.’ That was Polaris this year. But, no question, our plan was always to compete across the entire graphics end-to-end space. The next phase is compute and making sure we have a very competitive hardware and software platform.”

Raja Koduri, senior vice president of AMD’s Radeon Technologies Group, said in an interview, “We have a lot of work to do in graphics still. We haven’t taken care of it. We have a lot more opportunity if you look at the size of the market and the dollars that are still there. We have a lot more to grow in graphics alone. Compute is exciting, though. It’s great that we finally have a compelling software stack that goes along with our hardware.”

Along with the new hardware, AMD announced MIOpen, a free, open-source library for GPU accelerators intended to enable high-performance machine intelligence implementations. AMD has also optimized deep learning frameworks to run on its ROCm software, which it positions as the foundation for the next evolution of machine intelligence workloads.

“Instinct is a solid start for AMD and there’s a lot of work to get done and a lot to prove before they start taking business away from Nvidia,” said Patrick Moorhead, analyst at Moor Insights & Strategy. “I like that they not only rolled out cards, but they rolled out platforms and a software stack. Many customers want solutions, not just a bag of parts, and AMD knows this now.”

Inexpensive high-capacity storage, an abundance of sensor-driven data, and the exponential growth of user-generated content are generating exabytes of data globally. Recent advances in machine intelligence algorithms mapped to high-performance GPUs are enabling huge progress in the processing and understanding of that data, producing insights in near real time.

AMD bills Radeon Instinct as a blueprint for an open software ecosystem for machine intelligence, meant to speed both inference and algorithm training.

Above: Radeon Instinct.

Image Credit: AMD

Radeon Instinct accelerators are designed to address a wide range of machine intelligence applications. The lineup starts with the Radeon Instinct MI6, a passively cooled inference accelerator based on the Polaris GPU that delivers 5.7 teraflops of peak performance at 150 watts and carries 16 GB of GPU memory.

The Radeon Instinct MI8 accelerator harnesses the high-performance, energy-efficient Fiji Nano GPU. It is a small-form-factor accelerator for high-performance computing (HPC) and inference, offering 8.2 teraflops of peak performance at less than 175 watts and 4 GB of high-bandwidth memory.

And the Radeon Instinct MI25 accelerator will use AMD’s next-generation high-performance Vega GPU architecture and is designed for deep learning training, optimized for time-to-solution.

The free, open-source MIOpen GPU-accelerated library is expected to debut in the first quarter of 2017, providing GPU-tuned implementations of standard routines such as convolution, pooling, activation functions, normalization, and tensor format.
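For context, the routines MIOpen accelerates are the basic building blocks of convolutional neural networks. The sketch below is not MIOpen code; it is a minimal, illustrative NumPy version of a convolution, a ReLU activation, and a max-pooling step, the kind of kernels MIOpen is meant to provide GPU-tuned implementations of. The function names and shapes here are hypothetical.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' 2D cross-correlation, which deep learning frameworks call convolution."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

def relu(x):
    """Element-wise activation function."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling that shrinks each spatial dimension by `size`."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

# A toy forward pass: convolution -> activation -> pooling.
image = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (3, 3)
```

A library like MIOpen exists because plain loops like these are far too slow at production scale; the value is in hand-tuned GPU kernels for exactly these operations.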

The deep learning frameworks Caffe, Torch 7, and TensorFlow will be optimized for ROCm, allowing programmers to focus on training neural networks rather than on low-level performance tuning, which ROCm’s integrations are meant to handle.
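As a rough illustration, here is an ordinary TensorFlow training snippet using the graph-style API that was current at the time of the announcement. The point of AMD’s optimization work is that framework-level code like this would not need to change to target Radeon hardware; the ROCm-enabled TensorFlow build assumed here is AMD’s stated plan, not something shipping today.

```python
import numpy as np
import tensorflow as tf  # assumes a ROCm-enabled TensorFlow build when targeting Radeon GPUs

# A tiny logistic-regression graph; the code is the same regardless of
# whether the backend dispatches to CUDA or, per AMD's plan, to ROCm.
x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([4, 1]))
b = tf.Variable(tf.zeros([1]))
pred = tf.sigmoid(tf.matmul(x, w) + b)
loss = tf.reduce_mean(tf.square(pred - y))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    data = np.random.rand(256, 4).astype(np.float32)
    labels = (data.sum(axis=1, keepdims=True) > 2.0).astype(np.float32)
    for _ in range(100):
        sess.run(train_step, feed_dict={x: data, y: labels})
    print("final loss:", sess.run(loss, feed_dict={x: data, y: labels}))
```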

ROCm is intended to serve as the foundation of the next evolution of machine intelligence problem sets, with domain-specific compilers for linear algebra and tensors and an open compiler and language runtime.

Radeon Instinct products are expected to ship in the first half of 2017.
