Friday, September 14, 2018
After giving its longtime gamer fans the first shot at its newest graphics-processing units, Nvidia Corp. is next looking to use its new Turing technology to boost its artificial-intelligence efforts.
At a keynote speech in Japan on Thursday, Chief Executive Jensen Huang announced a new platform for AI inference in the data center that uses the Turing GPUs Nvidia first announced for gaming and graphics cards last month. Nvidia also announced new Turing-based systems for machines besides autonomous cars, with an eye on manufacturing robots and health-care machines that crunch lots of data.
While Nvidia has become a leader in machine learning in recent years, that position has mostly come from the “training” half of the AI equation, as researchers use the processing power of Nvidia GPUs to feed data into machines and help them “learn.” For the inference side, in which trained models apply that knowledge in real time, Nvidia announced a new Tesla T4 chip for servers and TensorRT software meant to boost inference capabilities in data centers.
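The training/inference split the article describes can be illustrated with a minimal sketch (not Nvidia code, and far simpler than any real workload): the compute-heavy training phase fits a model to data, while the inference phase just applies the finished model to new inputs.

```python
# Toy illustration of the two phases of machine learning.
# Training: iteratively fit a model to data (the compute-heavy step).
# Inference: apply the trained model to new inputs (the deployment step).

def train(data, epochs=1000, lr=0.01):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def infer(w, x):
    """Use the trained weight to predict an output for a new input."""
    return w * x

# Toy data drawn from the relationship y = 3x.
samples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(samples)          # slow, done once
print(round(infer(w, 4.0), 2))  # fast, done per request; close to 12.0
```

Real deployments run inference millions of times per trained model, which is why the article notes that inference "scales with the actual implementation" and represents the larger long-term volume.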
“Our AI inference market has been exploding,” Ian Buck, who runs Nvidia’s data-center business, told reporters in a pre-briefing on the news Wednesday afternoon.
Developing a strong hold in inference as well as training is important for Nvidia “because training scales with development, while inference scales with the actual implementation that comes later,” Morgan Stanley analysts wrote earlier this year; in other words, Nvidia needs a share of inference to reap long-term, sustainable rewards from its work in training.
Morgan Stanley upgraded Nvidia to overweight in that April note specifically because its analysts believed that Nvidia was making stronger inroads on the inference side of AI, which tends to involve larger volume but smaller margins. Nvidia on Wednesday said the inference market will be worth $20 billion in the next five years.
“We now believe that developments in hardware and software have positioned Nvidia to capture a higher portion of inference, key to the long-term growth rate,” the analysts wrote in April.
Nvidia said that Alphabet Inc.’s Google had already agreed to deploy the T4 chips in its data centers, and noted in a news release that several other important cloud providers and server companies — including Microsoft Corp., Cisco Systems Inc., Dell Technologies Inc., International Business Machines Corp. and Hewlett Packard Enterprise Co. — had voiced support for the platform.
“AI is becoming increasingly pervasive, and inference is a critical capability customers need to successfully deploy their AI models, so we’re excited to support Nvidia’s Turing Tesla T4 GPUs on Google Cloud Platform soon,” Chris Kleban, product manager at Google Cloud, said in the release.
Nvidia also announced new Jetson AGX Xavier high-performance computing systems meant for autonomous robots and other machines, as it continues to look for ways to leverage autonomy apart from cars. The announcement was geared toward the audience for the Japanese version of Nvidia’s GTC conference, and included partnerships with Japanese companies like Yamaha Motor Corp., which Nvidia said planned to use the platform on “unmanned agriculture vehicles, last-mile vehicles and marine products.”
Additionally, a special version of the AGX specifically designed for the health-care industry was announced with accompanying software, with all those products dubbed Clara. Nvidia announced its intent to focus on medical imaging and other data-heavy health-care tasks at its main GTC conference in Silicon Valley in March.
Copyright © 2018 CST, Inc. All Rights Reserved