The Deep Learning Enabled Cloud: Google's Play
This morning Google Cloud announced the upcoming availability of powerful, innovative GPU instances. As a beta tester of the new offering, we had the opportunity to take the instances for a spin against the 1.2-billion-row taxi dataset.
We were impressed.
The instances boast per-minute pricing flexibility, excellent configurations, and an innovative SSD approach that processes cold loads five times faster than other solutions we have seen. The net of this is that we expect Google to provide a compelling option in an increasingly competitive market for this next-generation compute platform.
As our founder and CEO noted in the Google blog,
"These new instances of GPUs in the Google Cloud offer extraordinary performance advantages over comparable CPU-based systems and underscore the inflection point we are seeing in computing today. Using standard analytical queries on the 1.2 billion row NYC taxi dataset, we found that a single Google n1-highmem-32 instance with 8 attached K80 dies is on average 85 times faster than Impala running on a cluster of 6 nodes each with 32 vCPUs. Further, the innovative SSD storage configuration via NVMe reduced cold load times by a factor of five. This performance offers tremendous flexibility for enterprises interested in millisecond speed over billions of rows."
Setting the particulars of the Google instances aside for a moment, their mere presence is yet another major milestone in what has been an epic year for GPUs, and more specifically for Nvidia.
As a result, Nvidia, the engine of the GPU revolution, is booming.
In the most recent quarter, revenues were up 52%. This isn’t some startup we are talking about; this is a company with a $5B-a-year run rate. More importantly, datacenter revenues nearly TRIPLED to $240M.
This is an extraordinary environment in which to operate, and we are very fortunate to have key relationships with these ecosystem players: participating in private betas and product-development feedback cycles, and being included in launch cycles for IBM, Amazon, and now Google.
Further, through Nvidia’s investment in us, we enjoy an exceptional relationship with the teams developing and deploying some of the fastest compute resources on the planet.
As the revolution grows, this market will get more crowded; we are already hearing about legacy CPU vendors scrambling to bolt their bloatware onto this amazing compute platform.
It won’t work.
To get performance like we see, you need to be true to the vision of GPUs, to have built your system from the ground up to extract the maximum the hardware offers.
Furthermore, you need to think about the problem differently. The challenges faced by enterprises aren’t just about fast database queries; they are about a fast system, and that includes the frontend as well.
Using GPUs to accelerate a few functions so you can claim to be faster than you used to be doesn’t mean much if the results feed a frontend that takes minutes to render them.
GPU-class speed has its own rules and they are hard to learn as you go.
To circle back to Google for a moment, what they are putting in place with these instances is a framework for working with massive datasets, and the pricing flexibility to make that economical.
This is, it seems, a major play for the deep learning market, a market that will redefine everything. For a handful of companies, machine intelligence is already becoming a reality. For the vast majority of others, however, analytics represent the first critical step down this trajectory-altering path. We are pleased to be part of that journey.
Google's vision is bold; these instances are true to that vision, and we expect them to see continued success.