Wednesday, March 21, 2018
Ten system makers showed servers using IBM’s Power 9 processor here amid expectations of rising sales for the x86 alternative. Their momentum will make at best a small dent in the market Intel dominates, but their targets include one of its most lucrative segments — machine learning jobs in the data center.
Google, an early partner in IBM’s Open Power initiative, announced that it is expanding its tests of Power 9 systems. An engineer leading the effort said that, given the search giant’s investments in the architecture, Google hopes to move at least some Power systems into production use this year.
China’s Alibaba and Tencent also are testing Power 9. Tencent said Power 9 is delivering 30 percent more performance than the x86 while using fewer servers and racks.
At least one Web giant is expected to announce production use of Power 9 systems this year. In addition, at least one top-tier server maker is quietly delivering Power systems to one data center, said Ken King, who manages the Open Power initiative for Big Blue.
IBM’s corporate aim is to win, within four years, at least 20 percent of the sockets for Linux servers sold for $5,000 or more, King said. IBM’s Power roadmap calls for annual processor upgrades in 14nm through 2019 and a Power 10 slated for some time after 2020, leaving room for the possible 7nm chip in 2020 that appeared on the roadmap shown two years ago.
Power 9 should do better than its predecessors given its costs, bandwidth and ease of porting. Power 9 is IBM’s first Power chip to use standard DIMMs, opening the door to other standard components that together cut overall system costs by 20 to 50 percent compared to the Power 8, IBM’s partners said.
The proprietary NVLink 2.0 interface that connects Power 9 to multiple Nvidia Volta GPUs provides a bandwidth edge over the x86. Many of the new Power 9 systems aim to leverage the Nvidia GPU’s dominance in training neural networks to win adoption among large data-center operators for AI jobs.
Indeed, one of three areas where Google sees promise for Power 9 is as a superior host teamed up with an accelerator such as its TPU. Power 9 also supports many cores and threads, factors closely tied to performance on Google search tasks, said Maire Mahony, a Google system engineer who serves as treasurer for the Open Power Foundation.
At a separate event, IBM announced it is making its Power 9 servers with Nvidia GPUs available as a cloud service for deep learning jobs. It claimed four of the new servers beat 89 Google Cloud servers by 39x in a terabyte-sized AI advertising benchmark.
The shift with Power 8 to an x86-like little-endian byte order is giving the architecture a software boost. Developers said Linux x86 applications can now be recompiled to run on Power, sometimes with no other changes.
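As a minimal illustration of why that helps (an assumed example, not code from IBM or its partners): on little-endian Power (ppc64le) Linux, native byte order matches x86-64, so binary layouts that x86 software takes for granted carry over unchanged.

```python
import struct
import sys

# On x86-64 and on little-endian POWER (ppc64le) Linux alike, the native
# byte order is little-endian, so data layouts that x86 code assumes
# behave identically on either architecture.
print(sys.byteorder)  # prints 'little' on x86-64 and on ppc64le
assert struct.pack("=I", 0x12345678) == struct.pack("<I", 0x12345678)

# A record packed with native byte order on an x86 box therefore reads
# back with the same field values on a ppc64le box, and vice versa.
record = struct.pack("=If", 42, 1.5)
count, scale = struct.unpack("=If", record)
assert (count, scale) == (42, 1.5)
```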
A short tour of some of the latest systems
A mix of second- and third-tier server makers such as Atos, Inspur, Supermicro and Wistron showed Power 9 systems at the Open Power Summit here. Cavium, Mellanox and others showed cards plugging into the chip’s OpenCAPI bus, while Broadcom and others showed cards for its PCI Express Gen 4 interconnect.
Inspur, one of IBM’s most bullish OEMs, believes it could sell 500 Power 9 systems in China this year and perhaps 2,000 next year, mainly targeting Web giants. Wistron, an unbranded design arm of Taiwan’s Acer and an IBM partner since the Power 5, believes Power 9 will drive double-digit growth in its Power business, which makes up about 5 percent of its total revenues.
Hitachi is focused on national research programs in Japan that are long-time customers of its big Power systems. It offers a Fortran compiler for Power 9 that can optimize parallelism, and it will sell systems overseas.
Raptor Computing Systems showed Power 9 desktops, workstations and servers hardened for security by leveraging the chip’s open firmware specs. System design delays forced the cancellation of Power 8 plans for both Raptor and data-center specialist Rackspace, but both companies were bullish on their new Power 9 products.
Some of the third-party systems are available now, with others coming by July. For its part, IBM started shipping its first Power 9 server in December and plans to release six more by the end of the month.
After Oracle cancelled development of its Sparc processors, IBM and ARM are the last major providers of alternative architectures. However, AMD’s resurgence over the last year with its Zen x86 processors has blunted some of the urgency for a second source.
IBM shifts from licensing IP to selling CPUs
When IBM launched the Open Power initiative about five years ago, it thought most of its customers would be chip designers. Now it believes its customers will be almost entirely systems OEMs.
The Power-compatible chip "path is still there, but [given high chip-design costs] it's not the big differentiator... now I/O is the differentiator, and cores and caches have become the plumbing," said Brad McCredie, an IBM fellow who heads up Power system development.
Today, Suzhou PowerCore remains the only announced Power-compatible chip maker. IBM’s King said other chip deals are in the works, mainly for organizations that serve government users, including an exascale supercomputer project in Europe.
China’s microprocessor clones have generally not had strong market traction, said one Inspur manager. The ARM-based Phytium from China Electronics Corp., like the PowerCore chip, generally serves Chinese government users. An x86-compatible processor from Zhaoxin in Shanghai has gotten little traction despite a design win at Lenovo, he said.
In the U.S., IBM is building Summit, a 200-petaflops Power 9 system for Oak Ridge National Laboratory. It will pack 4,608 nodes, each with two Power 9 CPUs and six Nvidia V100 GPUs, when it goes live late this year.
“New supercomputers are not necessarily faster, but they are wider,” said Jack Wells, a science director at Oak Ridge, pointing to the high bandwidth of the 13-megawatt system.
Ride-sharing company Uber aims to be one of Summit’s early users. It will use the giant system to run Horovod, a library for distributed deep-learning frameworks. Uber researcher Alex Sergeev described it as “a first exascale deep learning workload,” exercising the system’s three exaflops of peak 16-bit floating-point performance.
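Horovod’s pattern is to bolt synchronous data-parallel training onto an existing framework with a few added lines of code. The sketch below shows that pattern with the TensorFlow 1.x API of the time; the toy model, batch size and learning rate are illustrative placeholders, not Uber’s actual Summit workload.

```python
import tensorflow as tf                # TensorFlow 1.x API
import horovod.tensorflow as hvd

hvd.init()  # one process per GPU, typically launched with mpirun

# Pin each worker process to a single local GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# A stand-in toy model; a real workload would build its own graph here.
features = tf.random_normal([32, 128])
labels = tf.random_uniform([32], maxval=10, dtype=tf.int32)
logits = tf.layers.dense(features, 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# Scale the learning rate with the worker count and wrap the optimizer so
# that gradients are averaged across all ranks via all-reduce.
opt = tf.train.AdamOptimizer(0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
global_step = tf.train.get_or_create_global_step()
train_op = opt.minimize(loss, global_step=global_step)

hooks = [
    hvd.BroadcastGlobalVariablesHook(0),       # sync initial weights from rank 0
    tf.train.StopAtStepHook(last_step=1000),   # arbitrary stopping point
]
with tf.train.MonitoredTrainingSession(hooks=hooks, config=config) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```

Scaling the learning rate by hvd.size() follows the linear-scaling convention Horovod’s documentation recommends for large synchronous batches.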