Nvidia system fuses AI for high-performance computing

By Tim Sandle     Jun 24, 2018 in Technology
Nvidia has introduced a cloud server platform described as the first unified computing platform capable of tackling both artificial intelligence and high-performance computing.
The new system from Nvidia (Santa Clara, California, U.S.) is the HGX-2 cloud server platform. The set-up includes multi-precision computing capabilities designed to provide greater versatility and to accommodate future advances in computing. HGX-2 is part of the larger family of NVIDIA GPU-Accelerated Server Platforms, a range of different server classes.
A growing number of applications seek to combine high-performance computing with artificial intelligence. High-performance computing is the practice of aggregating computing power to deliver far greater performance than a typical desktop computer or workstation can provide. It is used to solve complex problems of the kind that arise in science, engineering, and major businesses.
With the Nvidia system, this includes high-precision calculations using the FP64 and FP32 floating-point formats, which are needed for scientific computing and simulations. The system also supports the lower-precision FP16 and Int8 formats, which are used for artificial intelligence training and inference.
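The trade-off between these precision formats can be illustrated with a small sketch (using NumPy dtypes as a stand-in for GPU precision modes, not Nvidia's actual API): FP64 preserves tiny increments that matter in scientific simulation, while FP16 rounds them away but uses a quarter of the memory and bandwidth, which is acceptable for AI workloads.

```python
import numpy as np

# High-precision (FP64) accumulation, as used in scientific computing:
# small increments are tracked accurately.
x64 = np.float64(1.0)
for _ in range(1000):
    x64 += np.float64(1e-8)

# Low-precision (FP16) accumulation, as used in AI training/inference:
# the same increments fall below FP16's resolution and are lost,
# but each value occupies only 2 bytes instead of 8.
x16 = np.float16(1.0)
for _ in range(1000):
    x16 += np.float16(1e-8)

print(x64)  # approximately 1.00001
print(x16)  # 1.0 -- increments below FP16 precision vanish
```

This is why a platform that can switch between precisions serves both workloads: simulations keep the accuracy of FP64, while AI training trades precision for throughput with FP16 or Int8.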
Speaking with EE News, Jensen Huang, who is the founder and chief executive officer of Nvidia, said: “The world of computing has changed…CPU scaling has slowed at a time when computing demand is skyrocketing. NVIDIA's HGX-2 with Tensor Core GPUs gives the industry a powerful, versatile computing platform that fuses HPC and AI to solve the world's grand challenges."
The new system is designed to function as a base on which industry can develop advanced systems for high-performance computing and artificial intelligence. As an example of the available power, Nvidia says the platform can achieve artificial intelligence training speeds of 15,500 images per second.
Where greater power is required, the platform can use an interconnect fabric to hook up 16 Nvidia Tesla V100 Tensor Core graphics processing units so that they work together as a single, giant graphics processing unit. This can deliver two petaflops of artificial intelligence performance.
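The two-petaflop figure follows from simple arithmetic, assuming each Tesla V100 contributes its rated Tensor Core throughput of roughly 125 teraflops (a published Nvidia specification, not stated in the article):

```python
# Back-of-the-envelope check on the 2-petaflop claim:
# 16 Tesla V100 GPUs linked by the interconnect fabric,
# each rated at ~125 TFLOPS of Tensor Core throughput.
tflops_per_v100 = 125
gpu_count = 16

total_tflops = tflops_per_v100 * gpu_count
print(total_tflops / 1000, "petaflops")  # 2.0 petaflops
```

The interconnect fabric matters here: without fast GPU-to-GPU links, the 16 GPUs could not share work and memory efficiently enough to behave as a single large accelerator.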
In related news, leading server makers Lenovo, QCT, Supermicro, and Wiwynn plan to bring their own HGX-2-based systems to market later in 2018.