Nvidia has GPU card for super computer PCIe slot


Wednesday, June 22, 2016

Nvidia officials are adding to the company's portfolio of graphics processors aimed at emerging markets like deep learning, artificial intelligence and computer vision with a new version of its powerful Tesla P100 GPU.

At Nvidia's GPU Technology Conference in April, CEO Jen-Hsun Huang introduced the Tesla P100, a data center GPU built on the company's Pascal architecture with a 16-nanometer FinFET manufacturing process, aimed at high-performance computing (HPC) environments and new workloads that require high levels of parallel processing. The first version announced at the conference used the company's new NVLink interconnect technology.

At the ISC High Performance 2016 show this week in Frankfurt, Germany, Nvidia officials unveiled the P100 GPU accelerator for PCIe, an interconnect technology common on most servers. The new chip, which will be available in the fourth quarter, delivers 4.7 teraflops of double-precision performance and 9.3 teraflops of single-precision performance, according to the company. It also provides 18.7 teraflops of half-precision performance with Nvidia's GPU Boost technology.
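Those figures scale roughly by powers of two across precisions: single precision (9.3 teraflops) is about twice the double-precision rate (4.7 teraflops), and half precision (18.7 teraflops) is about twice that again, reflecting the Pascal GP100 design in which FP64 units run at half the FP32 rate and two FP16 operations can be packed into each FP32 unit.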

It will come in two versions—one with 16GB of High-Bandwidth Memory (HBM2) and 720GB/second of memory bandwidth, and the other with 12GB HBM2 and 540GB/second of memory bandwidth. Nvidia said system OEMs like Hewlett Packard Enterprise, Dell, Cray, IBM and SGI are working on systems that will incorporate the P100 for PCIe.
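For readers who want to check which P100 variant a system actually exposes, the sketch below uses the standard CUDA runtime API to enumerate devices and print memory capacity and an estimated peak bandwidth. The bandwidth formula (memory clock x bus width x 2, the factor of 2 for HBM2's double data rate) is an assumption of this illustration, not something taken from Nvidia's announcement.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative sketch: enumerate CUDA devices and print the figures the
// article cites for the P100 -- HBM2 capacity and peak memory bandwidth.
// Bandwidth is estimated as memory clock x bus width x 2 (HBM2 is
// double data rate); this derivation is an assumption for illustration.
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        double gib  = prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0);
        double gbps = 2.0 * (prop.memoryClockRate * 1000.0)   // clock reported in kHz
                          * (prop.memoryBusWidth / 8.0)       // bus width in bits -> bytes
                          / 1.0e9;                            // bytes/s -> GB/s
        std::printf("Device %d: %s, %.0f GiB, ~%.0f GB/s peak bandwidth\n",
                    dev, prop.name, gib, gbps);
    }
    return 0;
}

On the 16GB PCIe card this should report roughly 720GB/second, and about 540GB/second on the 12GB variant, matching the figures above.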

The move to support PCIe is important to making supercomputing capabilities available to more scientists and researchers, according to company officials. Most systems include a PCIe slot, while NVLink, which is faster than PCIe, is less widely available. Nvidia estimates that two out of every three scientists don't have access to the compute cycles they need on HPC systems to do their work.

GPUs are increasingly being used to help accelerate workloads on HPC systems without ramping up power consumption too much. According to the latest Top500 list of the world's fastest supercomputers, released June 20 at the ISC High Performance show, 93 of the 500 systems use accelerators of some kind, with most of those (67) using Nvidia GPUs.

"Accelerated computing is the only path forward to keep up with researchers' insatiable demand for HPC and AI (artificial intelligence) supercomputing," Ian Buck, vice president of accelerated computing at Nvidia, said in a statement. "Deploying CPU-only systems to meet this demand would require large numbers of commodity compute nodes, leading to substantially increased costs without proportional performance gains."

The Tesla P100 for PCIe enables the creation of what Nvidia officials call "super nodes," each providing the throughput of more than 32 CPU-based nodes at up to 70 percent lower capital and operational costs. When running the Amber molecular dynamics code, a server powered by a single Tesla P100 delivers more performance than 50 CPU-only nodes, they said.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
