VMware claims that virtualized machine learning workloads perform "near or better than bare metal performance."
Its team trained BERT, a natural language processing model, on the SQuAD dataset, and Mask R-CNN, an image segmentation model, on the COCO dataset.
VMware deployed several NVIDIA GPUs, linked by NVIDIA's NVLink high-speed interconnect, for the training workloads.
This solution will provide users with bare metal server performance while also delivering VMware's virtualization benefits, such as server consolidation, power savings, virtual machine over-commitment, vMotion, high availability, DRS, central management with vCenter, suspending and resuming VMs, and cloning.
Kurkure said he anticipates these results will improve resource utilization in a number of fields, including "investment banking, pharmaceutical research, 3D CAD, and auto manufacturing".
VMware's Uday Kurkure told The Register he expects most high performance computing (HPC) workloads will be virtualized moving forward, adding that HPC teams are "always running into performance bottlenecks that leaves systems underutilized".
VMware told The Register it is also investigating how virtualized GPUs perform with even larger AI/ML models, such as GPT-3, as well as how the technology can be applied to telecoms workloads running at the edge.
The results were achieved using NVIDIA's vGPU Manager in vSphere, rather than hardware-level partitioning.