Google releases new artificial intelligence chip

Published on 12/12/2019 01:47 PM

Google has released the chip it designed specifically for artificial intelligence workloads.

The Tensor Processing Unit, which was announced in April last year, is designed to accelerate machine learning tasks such as deep learning. The TPU is said to process larger volumes of data per chip, which means fewer servers would be required for the same workloads and, ultimately, fewer new data centres would need to be built. On its blog at the time of the unveiling, the company explained that if Google Search users had used voice search “just for three minutes a day and we ran deep neural nets for our speech recognition system on the processing units we were using, we would have had to double the number of Google data centers”.

In its latest blog post, announcing the availability of the TPU, Google says that each Cloud TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory onto a single board. According to Google, this performance lets machine learning engineers, researchers and other developers of AI systems iterate faster. The company provided a few example scenarios:

  • Instead of waiting for a job to schedule on a shared compute cluster, you can have interactive, exclusive access to a network-attached Cloud TPU via a Google Compute Engine VM that you control and can customize (see the connection sketch after this list).
  • Rather than waiting days or weeks to train a business-critical ML model, you can train several variants of the same model overnight on a fleet of Cloud TPUs and deploy the most accurate trained model in production the next day.
  • Using a single Cloud TPU and following Google’s tutorial, you can train ResNet-50 to the expected accuracy on the ImageNet benchmark challenge in less than a day, all for well under $200.
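To make the first scenario concrete, here is a minimal sketch of how a program running on a Google Compute Engine VM can attach to a network-attached Cloud TPU. It uses the current TensorFlow 2.x distribution API (the API available at the time of this announcement was different), and the TPU name my-cloud-tpu is a hypothetical placeholder for whatever name the TPU was provisioned under.

    import tensorflow as tf

    # "my-cloud-tpu" is a hypothetical name; use the name (or grpc:// address)
    # that your Cloud TPU was provisioned with.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-cloud-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)

    # Variables and computation created inside this scope are placed on the
    # TPU's cores and replicated across them.
    strategy = tf.distribute.TPUStrategy(resolver)

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10),
        ])
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"],
        )

    # A subsequent model.fit(...) call then runs the training steps on the
    # TPU rather than on the VM's own CPU.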

Alongside Cloud TPUs, Google says it will continue to offer high-performance CPUs (including Intel Skylake) and GPUs (including the Nvidia Tesla V100). Cloud TPU usage is billed by the second, at a rate of $6.50 per Cloud TPU per hour.
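Because billing is per-second against an hourly rate, the cost of a run is simple to estimate. The short Python sketch below uses the rate quoted above to reproduce the arithmetic behind the under-$200 ResNet-50 example: a 24-hour run on a single Cloud TPU comes to 24 × $6.50 = $156.

    # Back-of-the-envelope Cloud TPU cost, using the $6.50 per TPU per hour
    # rate quoted in this article and per-second billing.
    HOURLY_RATE_USD = 6.50

    def job_cost_usd(hours: float, num_tpus: int = 1) -> float:
        """Cost in USD of running num_tpus Cloud TPUs for the given hours."""
        seconds = hours * 3600
        per_second_rate = HOURLY_RATE_USD / 3600
        return seconds * per_second_rate * num_tpus

    # A full-day ResNet-50 training run on one Cloud TPU, as in Google's
    # example above:
    print(f"${job_cost_usd(24):.2f}")  # $156.00, well under $200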

Two companies that have partnered with Google as early users of the Cloud TPU are investment firm Two Sigma and ride-hailing company Lyft, whose Level 5 division is developing self-driving vehicles. Alfred Spector, CTO of Two Sigma, says:

“We made a decision to focus our deep learning research on the cloud for many reasons, but mostly to gain access to the latest machine learning infrastructure. Google Cloud TPUs are an example of innovative, rapidly evolving technology to support deep learning, and we found that moving TensorFlow workloads to TPUs has boosted our productivity by greatly reducing both the complexity of programming new models and the time required to train them. Using Cloud TPUs instead of clusters of other accelerators has allowed us to focus on building our models without being distracted by the need to manage the complexity of cluster communication patterns.”

Anantha Kancherla, head of software for self-driving Level 5 at Lyft, says:

“Since working with Google Cloud TPUs, we’ve been extremely impressed with their speed—what could normally take days can now take hours. Deep learning is fast becoming the backbone of the software running self-driving cars. The results get better with more data, and there are major breakthroughs coming in algorithms every week. In this world, Cloud TPUs help us move quickly by incorporating the latest navigation-related data from our fleet of vehicles and the latest algorithmic advances from the research community.”

Level 5 refers to the levels of driving automation defined by SAE International, the engineering standards body. Level 0 means no self-driving capability at all; Level 5 means the vehicle can drive itself and requires no human driver.
