Google Reveals a Powerful New AI Chip and Supercomputer

The new AI chip and a cloud-based machine-learning supercomputer will help Google establish itself as an AI-focused hardware maker.


At Google’s annual developer conference, CEO Sundar Pichai announced a new computer processor designed to perform the kind of artificial-intelligence computing that has taken the industry by storm.

The announcement reflected how quickly machine learning is transforming Google itself. It is also a sign that the company plans to lead progress in every relevant layer of the technology, both software and hardware.

The new processor, called the Cloud Tensor Processing Unit, is named after Google’s TensorFlow machine-learning framework. It is tailored for those working with artificial intelligence: it runs trained models at remarkable speed, and, unusually, it can also train them efficiently.

Training is central to machine learning. To create an algorithm that can recognize hot dogs in images, for example, you would feed it thousands of example photos of hot dogs and not-hot-dogs until it learns to tell the difference. The calculations involved are so demanding that training a large model can take days or weeks.
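To make the idea concrete, here is a minimal sketch of such a training loop in TensorFlow. (It uses the modern Keras API, which postdates this announcement, and random stand-in data in place of real labeled photos.)

```python
import numpy as np
import tensorflow as tf

# Stand-in for thousands of labeled example images (64x64 RGB).
images = np.random.rand(1000, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 2, size=(1000,))  # 1 = hot dog, 0 = not

# A small convolutional network: the kind of model whose training
# hardware like the Cloud TPU is designed to accelerate.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# "Training" means repeatedly adjusting the model's weights against
# the examples until it can tell the two classes apart.
model.fit(images, labels, epochs=5, batch_size=32)
```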

Pichai also announced machine-learning supercomputers, called Cloud TPU pods, built from sets of Cloud TPUs wired together with high-speed data connections. And he said Google is creating the TensorFlow Research Cloud, which will make thousands of TPUs available over the Internet.
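For a sense of how researchers reach this hardware in practice, here is a sketch of pointing a TensorFlow program at a Cloud TPU. (The tf.distribute API shown here postdates the announcement, and the TPU name "my-tpu" is hypothetical.)

```python
import tensorflow as tf

# Locate the Cloud TPU and initialize it for this program.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built inside the strategy's scope has its variables placed
# on the TPU, and training steps run across the TPU's cores.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
```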

Pichai said, “We are building what we think of as AI-first data centers. Cloud TPUs are optimized for both training and inference.”

Google will make 1,000 Cloud TPU systems available to machine-learning researchers who are willing to share details of their work.

Pichai also announced a number of AI research initiatives, including an effort to develop algorithms that learn to do the time-consuming work of improving other algorithms. Google is also developing AI tools for medical image analysis, genomic analysis, and molecule discovery.

Google’s work on AI-focused hardware and cloud services is partly an effort to accelerate its own operations. Google itself uses TensorFlow to power search, speech recognition, translation, and image processing. TensorFlow also underpins AlphaGo, the Go-playing program created by DeepMind, another Alphabet subsidiary.

Strategically, the new hardware could help Google cement a central position in artificial intelligence. Still, companies such as Nvidia, whose graphics processing chips are widely used for machine learning, are gaining prominence with competing products.

To illustrate the performance boost, Google says that a training run that takes a full day on 32 of the best GPUs can be completed in an afternoon using one-eighth of one of its TPU pods.

Fei-Fei Li, chief scientist at Google Cloud and director of Stanford’s AI lab, said, “These TPUs deliver a staggering 128 teraflops, and are built for just the kind of number crunching that drives machine learning today.”

A teraflop is a trillion “floating-point” operations per second. By comparison, an iPhone 6 is capable of about one billion floating-point operations per second.
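A rough back-of-the-envelope comparison, using only the figures quoted in this article:

```python
# Figures as quoted in the article (not independently verified here).
tpu_flops = 128e12      # 128 teraflops for the Cloud TPUs Li describes
iphone6_flops = 1e9     # ~1 billion floating-point operations per second

# The cited TPU hardware does roughly 128,000 times the phone's arithmetic.
print(f"{tpu_flops / iphone6_flops:,.0f}x")  # -> 128,000x
```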

To preserve freedom of design, Google said, researchers will still be able to develop their algorithms using other hardware before porting them to TensorFlow.

A growing number of researchers have adopted TensorFlow since Google released the software in 2015. Google says it is now the most widely used deep-learning framework in the world.
