
Roughly seven months ago, Nvidia launched the Tesla V100, a $10,000 Volta GV100 GPU for the supercomputing and HPC markets. This massive card was intended for specialized markets thanks to its enormous die size (815mm²) and massive transistor count (21.1B). In return, it offered specialized tensor cores, 16GB of HBM2, and theoretical performance in certain workloads far above anything Nvidia had shipped before.

Today, at the Conference on Neural Information Processing Systems (NIPS), Jen-Hsun Huang surprise-launched the same GV100 architecture in a traditional GPU form factor. Just as the GTX 1080 Ti is a trimmed-down version of the Nvidia Titan Xp, this new Titan V slims down in some spots compared with the full-fat Tesla V100. Memory clocks are very slightly lower (1.7Gbps transfer rate, down from 1.75Gbps), and the GPU has three HBM2 stacks on a 3,072-bit bus, rather than the 4,096-bit interface the Tesla V100 offers. It also offers only 12GB of HBM2, rather than the 16GB on the Tesla V100.
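
Those bus-width and clock figures translate directly into memory bandwidth. As a back-of-the-envelope sketch in Python (using only the numbers quoted above), the narrower, slightly slower interface works out to roughly 653GB/s of theoretical peak bandwidth for the Titan V versus roughly 896GB/s for the Tesla V100:

    # Theoretical peak HBM2 bandwidth: per-pin rate (Gbps) * bus width (bits) / 8 bits per byte
    def hbm2_bandwidth_gbs(rate_gbps: float, bus_width_bits: int) -> float:
        return rate_gbps * bus_width_bits / 8

    titan_v = hbm2_bandwidth_gbs(1.70, 3072)     # ~652.8 GB/s
    tesla_v100 = hbm2_bandwidth_gbs(1.75, 4096)  # ~896 GB/s
    print(f"Titan V:    {titan_v:.1f} GB/s")
    print(f"Tesla V100: {tesla_v100:.1f} GB/s")
    print(f"Titan V keeps ~{titan_v / tesla_v100:.0%} of the V100's peak bandwidth")  # ~73%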

Nvidia is trumpeting the Titan V as offering 110 TFLOPS of horsepower, "9x that of its predecessor." We don't doubt that's literally true, but it's not a comparison to the single-precision or double-precision math we've typically referenced when discussing GPU FLOPS performance. It's a reference to Volta's performance improvement in deep learning tasks over Pascal, and it's derived by comparing Volta's tensor performance (with its specialized tensor cores) against Pascal's 32-bit single-precision throughput. That doesn't mean the comparison is invalid, since Volta has specialized tensor cores for training neural networks and Pascal doesn't, but it's a little like comparing AES encryption performance on a CPU with specialized hardware for that workload against another CPU that lacks it. Is the comparison fair? Absolutely. But it's fair only for the specific metric being measured, as opposed to being a generalizable test case for the rate of improvement one CPU offers over the other.
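
To see where that 9x figure comes from, a rough sketch helps. The assumption here (Nvidia doesn't spell out which predecessor it measured against) is that the Pascal baseline is the Titan Xp's roughly 12 TFLOPS of single-precision throughput:

    # Rough sketch of the "9x" claim: Volta tensor TFLOPS vs. Pascal FP32 TFLOPS.
    # Assumption: the Pascal baseline is the Titan Xp (3,840 CUDA cores, ~1,582MHz boost,
    # 2 FLOPs per core per clock for fused multiply-add).
    titan_xp_fp32_tflops = 3840 * 1.582e9 * 2 / 1e12   # ~12.1 TFLOPS single precision
    titan_v_tensor_tflops = 110.0                      # Nvidia's quoted tensor-core figure
    print(f"Titan Xp FP32:  {titan_xp_fp32_tflops:.1f} TFLOPS")
    print(f"Titan V tensor: {titan_v_tensor_tflops:.0f} TFLOPS")
    print(f"Ratio: ~{titan_v_tensor_tflops / titan_xp_fp32_tflops:.1f}x")  # ~9x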

Nvidia's stated goal with the Titan V is to offer researchers who don't have access to supercomputers or big iron HPC installations the same access to cutting-edge hardware performance that their compatriots enjoy. While the GPU is priced at an eye-popping $3,000 (relative to the regular PC market), that's not very much compared with the typical cost of an HPC server.

"Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links," said Nvidia CEO Jen-Hsun Huang. "With TITAN V, nosotros are putting Volta into the hands of researchers and scientists all over the world. I can't wait to see their breakthrough discoveries."

You can buy a Titan V at the Nvidia store right now, but we can't honestly say we'd recommend one for anyone not working in these fields. Despite the "Titan" brand having originally debuted as a high-end consumer card with some specialized scientific compute capabilities, this GPU family has been moving back towards its scientific computing research roots for a number of years. While Nvidia will obviously support the GPU with a unified driver model, I wouldn't hold my breath waiting for fine-tuned gaming support from a GPU family that so few customers will ever have access to.