Chipmaker Nvidia today announced two new flavors of Tesla graphics processing units (GPUs) targeted at artificial intelligence and other complex types of computing. The M4 is meant for scale-out architectures inside data centers, while the larger M40 is aimed squarely at raw performance.
The M4 packs 1,024 Nvidia CUDA cores, 4GB of GDDR5 memory, 88GB/second of memory bandwidth, power usage of 50-75 watts, and peak single-precision performance of 2.2 teraflops.
The brawnier M40, by contrast, comes with 3,072 CUDA cores, 12GB of GDDR5 memory, 288GB/second of memory bandwidth, power usage of 250 watts, and peak single-precision performance of 7 teraflops.
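Those peak figures line up with the usual rule of thumb for single-precision throughput: CUDA cores × 2 (one fused multiply-add per core per cycle) × clock speed. Here is a minimal sketch of that arithmetic; the boost clocks used are assumptions, not part of this announcement:

```python
# Rough check of the quoted peak single-precision figures.
# Assumption: boost clocks of ~1.07GHz (M4) and ~1.11GHz (M40),
# which are not stated in the announcement.
def peak_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    # Each CUDA core can retire one fused multiply-add (2 FLOPs) per cycle.
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

print(f"M4:  ~{peak_tflops(1024, 1.07):.1f} teraflops")   # ~2.2
print(f"M40: ~{peak_tflops(3072, 1.11):.1f} teraflops")   # ~6.8, marketed as 7
```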
These new GPU accelerators, based on Nvidia's Maxwell architecture, are the successors to the company's Kepler-based Tesla K40 and K80. (Specs are here.)
The GPU has become a recognized standard for a type of AI called deep learning. Over the past year, Nvidia has pushed increasingly hard to market itself as a key arms dealer for deep learning. For years, Nvidia had marketed its Tesla line of GPUs under the banner of "accelerated computing," but in its annual report to investors this year, the company changed its tune and began emphasizing Tesla's deep learning capabilities.
In addition to coming out with the new GPUs — and accompanying performance benchmarks for the Caffe deep learning framework — Nvidia is also introducing its new Hyperscale Suite of software, including the cuDNN library for building applications with deep learning, a GPU-friendly version of the FFmpeg video processing framework, and an Image Compute Engine tool for resizing images.
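For a sense of how those pieces fit together in practice, here is a minimal sketch of GPU-accelerated inference with Caffe, the framework Nvidia benchmarked. It assumes a Caffe build with GPU support (and cuDNN, if compiled in); the model file names and the 'data' input blob are placeholders, not part of Nvidia's announcement:

```python
# Minimal sketch: running Caffe inference on an Nvidia GPU.
# The prototxt/caffemodel paths are placeholders; Caffe only uses cuDNN
# if it was compiled with the USE_CUDNN flag enabled.
import numpy as np
import caffe

caffe.set_device(0)      # select the first GPU (e.g., a Tesla M4/M40)
caffe.set_mode_gpu()     # run all layers on the GPU

net = caffe.Net('deploy.prototxt',       # network definition (placeholder)
                'weights.caffemodel',    # trained weights (placeholder)
                caffe.TEST)

# Feed a batch of random data shaped to the network's input blob.
batch = np.random.rand(*net.blobs['data'].data.shape).astype(np.float32)
net.blobs['data'].data[...] = batch
output = net.forward()
print({name: blob.shape for name, blob in output.items()})
```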
The Tesla M40 and the new software will be out later this year. The M4 will become available in the first quarter of next year.
More detail on the news is here.