Nvidia’s new competitors argue that they can build hardware that runs AI software faster and more efficiently by designing chips for that purpose from scratch, rather than adapting graphics-chip technology.
For example, Intel promises to release a chip for deep learning later this year, built on technology gained through its 2016 acquisition of the startup Nervana (see “Intel Outside as Other Companies Prosper from Graphics Chips”).
Meanwhile, Google disclosed last summer that it was already using a chip customized for AI, developed in-house, called a Tensor Processing Unit, or TPU.
Several engineers who built Google’s chip have since left the company to form Groq, a startup with $10 million in funding that is building its own specialized machine-learning chip. Other startups working on similar projects include Wave Computing, which says it is already letting customers test its hardware.
See the full story here: https://www.technologyreview.com/s/607818/battle-to-provide-chips-for-the-ai-boom-heats-up/