In this article, we introduce a top computing card recently released by NVIDIA, the Tesla V100S. What exactly does it offer? Let's take a look.
The original Tesla V100 was unveiled at the GTC 2017 conference in May 2017. Built on the Volta architecture and the GV100 core, manufactured on TSMC's 12nm process, it integrates 21 billion transistors on an 815 mm² die and packs 5120 CUDA cores and 640 Tensor cores. It initially shipped in the SXM2 form factor (300GB/s NVLink bus), with a PCIe version (32GB/s PCIe bus) added soon after.
More than two years later, the Tesla V100's position remains unshaken, and the new Tesla V100S goes a step further: both the core and the memory run faster, while power consumption is unchanged.
The Tesla V100S comes only as a PCIe expansion card. It delivers 8.2 TFLOPS (teraflops) of double-precision floating-point performance, 16.4 TFLOPS of single-precision performance, and 130 TFLOPS of deep-learning performance, improvements of up to 17% and 5% over the PCIe and SXM2 versions of the Tesla V100, respectively.
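The quoted throughput figures can be sanity-checked from the core counts given above. A minimal sketch follows; note that the ~1601 MHz boost clock is an assumption inferred from the quoted numbers (NVIDIA's datasheet is the authoritative source), and the per-core FLOP rates are the standard ones for the Volta generation.

```python
# Back-of-envelope check of the quoted Tesla V100S throughput figures.
# ASSUMPTION: boost clock of ~1601 MHz, inferred from the quoted TFLOPS,
# not taken from an official spec sheet.

CUDA_CORES = 5120
TENSOR_CORES = 640
BOOST_CLOCK_HZ = 1.601e9  # assumed boost clock

# Each CUDA core performs one FMA (2 FLOPs) per cycle in FP32.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12
# On GV100 the FP64 rate is half the FP32 rate.
fp64_tflops = fp32_tflops / 2
# Each Tensor core performs a 4x4x4 matrix FMA: 64 FMAs = 128 FLOPs/cycle.
tensor_tflops = TENSOR_CORES * 128 * BOOST_CLOCK_HZ / 1e12

print(f"FP32:   {fp32_tflops:.1f} TFLOPS")    # ~16.4
print(f"FP64:   {fp64_tflops:.1f} TFLOPS")    # ~8.2
print(f"Tensor: {tensor_tflops:.1f} TFLOPS")  # ~131, near the quoted 130
```

The three results line up with the article's 16.4, 8.2, and 130 TFLOPS figures to within rounding, which is reassuring given that the clock was back-solved from them.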
For memory, it still uses HBM2, with the capacity fixed at 32GB (the 16GB version is gone). The bus width remains 4096-bit, the frequency rises from 1.75GHz to 2.21GHz, and bandwidth climbs from 900GB/s to 1134GB/s.
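The bandwidth figure follows directly from the bus width and frequency. A quick sketch, assuming the "2.21GHz" figure is the effective per-pin data rate in Gbit/s (the usual convention for HBM2 marketing numbers):

```python
# Sanity check: HBM2 bandwidth = bus width * per-pin data rate / 8.
# ASSUMPTION: the quoted 2.21GHz is the effective data rate per pin.

BUS_WIDTH_BITS = 4096
DATA_RATE_GBPS = 2.21  # Gbit/s per pin

bandwidth_gbs = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8
print(f"{bandwidth_gbs:.0f} GB/s")  # ~1132, matching the quoted 1134GB/s to rounding
```

The same formula with the old 1.75 Gbit/s rate gives 896 GB/s, i.e. the "900GB/s" of the original Tesla V100.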
Despite the sizable speed increase, the power consumption of the Tesla V100S holds at 250W, a clear sign that both the manufacturing process and the core architecture have matured.
Has this introduction piqued your interest in the card? If you want to know more about the Tesla V100S, a quick web search will turn up further information.