Tech giant Google today made a new graphics accelerator available in its public cloud to provide better support for AI and virtual desktop workloads.
The chip comes from market leader Nvidia Corp. It brings the number of Nvidia graphics processing units that Google Cloud Platform supports to four, all of which have been added since 2017. The pace at which the company is expanding its GPU lineup reflects just how fast enterprise AI adoption is growing.
With a starting cost of 60 cents per hour, the P4 is the second most affordable of the four GPUs available. The chip can provide up to 5.5 teraflops of performance when processing single-precision values, which take up 4 bytes each. A single teraflop is equivalent to 1 trillion floating point operations per second, the standard unit of computing power.
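To make those figures concrete, here is a minimal back-of-envelope sketch in Python. The throughput and price come from the article; the cost-per-teraflop-hour comparison is an illustrative metric of my own, not something Google or Nvidia publishes.

```python
# Back-of-envelope arithmetic for the figures quoted above.
# Assumption: "teraflop" here means 10^12 floating point operations per second.

TERA = 1e12              # one teraflop = 1 trillion ops per second

p4_tflops = 5.5          # quoted single-precision (4-byte) throughput
price_per_hour = 0.60    # quoted starting cost in USD

ops_per_second = p4_tflops * TERA

# A rough way to compare GPU instance pricing (hypothetical metric)
cost_per_tflop_hour = price_per_hour / p4_tflops

print(f"{ops_per_second:.1e} single-precision ops/sec")  # 5.5e+12
print(f"${cost_per_tflop_hour:.3f} per teraflop-hour")   # $0.109
```

At the quoted rates, the P4 works out to roughly 11 cents per teraflop-hour of single-precision compute.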
Nvidia has equipped the P4 with 8 gigabytes of GDDR5 memory, a type of memory designed mainly for use in GPUs. On-chip memory is faster than the regular standalone kind because it keeps data closer to the GPU cores, which cuts latency.
In AI deployments, Google sees its new cloud-based P4s being used mainly for machine learning inference. That's the data processing neural networks do once they're in production, after they have been trained. Training itself is an entirely different task that's sometimes better served by more powerful GPUs.
Nvidia is a key partner for Google’s efforts to address the growing role of GPUs in companies’ technology strategies. With that said, Google is not fully reliant on the chipmaker for AI processors. The company also offers cloud customers its Tensor Processing Units, internally designed chips customized for running neural networks that can each provide a massive 180 teraflops of computing power.