
Startup unveils ‘AI supercomputer on a chip’

Published Tue 20 Aug 2019

Cerebras says its Wafer Scale Engine will slash AI training times

San Francisco-based startup Cerebras Systems has unveiled what it claims is the largest processor ever built.

The Wafer Scale Engine (WSE), designed to process AI applications, comes packed with 1.2 trillion transistors and measures 46,225mm², making it 57 times the size of Nvidia’s flagship V100 data centre graphics card, which contains 21.1 billion transistors. While Samsung has previously built a flash memory chip with 2 trillion transistors, Cerebras’s chip stands out in that it’s built for processing.

“Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size—such as cross-reticle connectivity, yield, power delivery, and packaging,” said Andrew Feldman, founder and CEO of Cerebras Systems.

The accelerator contains 3,000 times more high-speed on-chip memory and 10,000 times more memory bandwidth than the largest GPU — properties that Cerebras claims enable the chip to produce answers more quickly, allowing researchers to test more ideas and tap into more data.

While chip manufacturers often compete to build ever-smaller processors, larger-scale chips lend themselves to AI applications, as they reduce bottlenecks that manifest when inputs “loop” during machine learning training. Google, Facebook, OpenAI, Tencent, Baidu, and others have previously argued that the biggest challenge facing AI today is that it takes too long to train models.

Unlike traditional chips, which are cut from a silicon wafer into many individual dies, the WSE is made from a single wafer. This means the chip has more cores with which to perform calculations and more memory closer to those cores, accelerating calculation and communication and thus reducing training time, Cerebras said.

“While AI is used in a general sense, no two data sets or AI tasks are the same. New AI workloads continue to emerge and the data sets continue to grow larger,” said Jim McGregor, principal analyst and founder at TIRIAS Research.

“As AI has evolved, so too have the silicon and platform solutions. The Cerebras WSE is an amazing engineering achievement in semiconductor and platform design that offers the compute, high-performance memory, and bandwidth of a supercomputer in a single wafer-scale solution.”

The achievement would not have been possible without TSMC, the leading semiconductor foundry, which manufactured the chip using its advanced 16nm process.

“TSMC has long partnered with the industry innovators and leaders to manufacture advanced processors with leading performance. We are very pleased with the result of our collaboration with Cerebras Systems in manufacturing the Cerebras Wafer Scale Engine, an industry milestone for wafer-scale development,” said JK Wang, TSMC senior vice president of operations.


Tags:

fabrication, machine learning



