Who’s going to come out on top: tech industry giants or VC funded startups?
As the world embraces artificial intelligence (AI), semiconductor developers are scrambling to create chips that have the computational strength necessary to take AI from the current limits of its potential to the pinnacle of practicality.
The incredible inroads researchers are making in sensor development, robotics, and sophisticated process automation courtesy of AI are destined to keep driving society forward. What drives AI is data; according to EMC, by 2020 the data we create and copy every year will reach 44 trillion gigabytes, or if you prefer, 44 zettabytes.
Such an astounding amount of data requires an equally mind-bending amount of computational power to leverage, particularly when it comes to teaching computers to think more deeply than ever before. AI chips enable computer software to unearth patterns and make inferences from these massive amounts of data, a process known as deep learning.
Very simply put, deep learning is a type of machine learning that enables computers to learn by example.
According to the Institute of Electrical and Electronics Engineers, “deep learning involves the building of learning machines from five or more layers of artificial neural networks.” A neural network is a complex mathematical system that learns tasks by analyzing vast amounts of data; “deep” refers only to the depth of the layers in the network.
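To make “depth” concrete, here is a minimal sketch of a network that meets that five-layer threshold, written with the Keras API that ships with TensorFlow (our illustrative choice of library, not anything the chipmakers mandate; layer sizes and activations are arbitrary):

```python
# A toy five-layer network, matching the IEEE definition above.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                      # e.g., a flattened 28x28 image
    keras.layers.Dense(512, activation="relu"),     # layer 1
    keras.layers.Dense(256, activation="relu"),     # layer 2
    keras.layers.Dense(128, activation="relu"),     # layer 3
    keras.layers.Dense(64, activation="relu"),      # layer 4
    keras.layers.Dense(10, activation="softmax"),   # layer 5 (output)
])
model.summary()  # prints the five-layer stack and its parameter counts
```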
Deep learning appears to mimic human learning because it recognizes images, objects, and speech in real time, but it can’t actually replicate human thought. For reference, the human brain has one hundred billion neurons, each of which in turn has an average of 7,000 synaptic connections to other neurons. Recent advancements in deep learning have resulted in significantly faster and more effective ways to train the neural software networks that make AI possible.
Once networks are trained, they can make inferences from what they’ve learned. Training and inference, or execution, require distinctly different technologies. Making inferences on large blocks of data requires a great many mathematical calculations to run in parallel with one another. Those calculations take time, and can be sped up by using graphics processing unit (GPU) chips.
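To see why this math parallelizes so well, consider a single network layer applied to a batch of inputs: it collapses into one large matrix multiplication, exactly the kind of operation a GPU spreads across thousands of cores. A rough sketch (NumPy shown for illustration; it runs the same math on a CPU):

```python
import numpy as np

# Inference for one dense layer over a whole batch collapses into a single
# matrix multiplication: thousands of independent multiply-adds that a GPU
# can execute side by side.
batch = np.random.rand(10_000, 784)    # 10,000 inputs processed at once
weights = np.random.rand(784, 128)     # one layer's learned weights
activations = batch @ weights          # all 10,000 x 128 outputs in one call
print(activations.shape)               # (10000, 128)
```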
GPUs have been around since the 1970s, and we knew them first as the chips that make gaming more realistic. Today, computer buyers look as closely at a model’s GPU power as they do its memory and central processing unit (CPU) performance.
According to Tractica, the deep learning chip market is expected to soar from its 2016 valuation of $513 million to $21.2 billion in 2025. So whose slender sliver of silicon will cash in on this new age of high-powered, intelligent computing: the established tech giants or the startups? Let’s take a look.
The Incumbents
Nvidia
While Nvidia claims to have invented GPUs in the late 1990s, in reality they only created the moniker “graphics processing unit.” The components that make up a GPU chip had been in existence for decades prior to Nvidia coining the descriptor.
We’ll let that one pass, particularly because Nvidia got the jump on the rest of the field in the early days. Recognizing that GPUs were ideal for parallel processing, the company developed its CUDA programming platform and API to support them. Developers found that the same technology used to deliver a stunning 3D video display had a massive amount of unexplored potential and used it, in part, to craft the technology to power deep learning.
That breakthrough put Nvidia in the catbird seat, launching them well ahead of Intel, AMD, Microsoft, and other legacy firms. The behemoth’s 2017 Q3 revenue is a stout $2.3 billion.
Intel
Intel has long suffered the slings and arrows of its misfortunes, but some clever and pricey acquisitions combined with robust investments in AI technologies will keep them very nicely positioned in the fray. As PC chip sales continue to decline, Intel reportedly derives 46 percent of its revenue from the sale of other technologies, including AI chips. In March of this year, Intel established its AI group, tapping former Nervana Systems CEO Naveen Rao to lead it.
Acquired by Intel in 2016 for $350 million, Nervana created the Neural Network Processor (NNP), a chip designed for training deep learning neural networks. A year into the acquisition, Intel claims that its groundbreaking self-learning Loihi AI chip, a neuromorphic design whose circuits act like the brain’s neurons, is extremely energy efficient and will accelerate the entire AI field.
Nervana was only one of three significant AI-related M&A deals executed by Intel last year: Movidius, a chipmaker whose low-power computer vision chips now power drone developer DJI’s Spark mini-drone, and the assisted-driving supplier Mobileye were also picked up by the computing giant. Intel’s outlay for AI chip technology to date: a cool $1 billion.
Google
Google’s switch from a Mobile First to an AI First approach, announced at this year’s Google I/O developers conference, means that the company is reevaluating every product they create to “apply machine learning and AI to solve user problems,” said CEO Sundar Pichai.
In 2016, Google introduced its Tensor Processing Unit (TPU), the company’s first custom application-specific integrated circuit (ASIC) accelerator for machine learning. The TPU was designed for use with TensorFlow, Google’s open source machine learning framework, which has become a major platform for building AI software. The current crop of second-generation Cloud TPUs will enable Google’s servers to perform training and inference at the same time.
That’s a huge advancement in speed and capability that will enable researchers to experiment with AI more deeply than ever before. TPU 2.0 is available through a dedicated cloud service, which the company hopes will become a powerhouse revenue center.
According to experts at the Google Brain AI lab, the TPU 2.0 chip can train neural networks several times faster than existing processors, in some cases cutting training time from a day to just hours. Extensive details about the chip technology underpinning this shift (and precise revenue figures) are intensely guarded, as you’d expect.
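The silicon details may be guarded, but the developer-facing side is public: the same TensorFlow model definition can be pointed at a TPU with a few lines of setup. Here is a hedged sketch using TensorFlow’s current TPU boilerplate (the exact calls have evolved since the first TPUs shipped, and a reachable TPU runtime, such as a Cloud TPU instance, is assumed):

```python
import tensorflow as tf

# Standard TF 2.x pattern for attaching to a TPU; assumes a TPU runtime
# (e.g., a Cloud TPU) is actually reachable from this process.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# The model itself is defined exactly as it would be for a CPU or GPU;
# the strategy scope is what places it on the TPU's cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```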
Wave Computing
Campbell, California startup Wave Computing’s dataflow architecture is the technology supporting its machine learning compute appliance. According to the company’s website, the appliance “eliminates the traditional CPU/GPU co-processor structure and associated performance and scalability bottlenecks” to speed up neural network training.
Industry sources describe Wave’s chip as a type of FPGA/ASIC hybrid that can deliver up to 1,000 times the performance for training compared with other methods. In March of this year, Wave secured over $43 million in a Series C funding round for a total of over $60 million in its seven-year history, but there’s no word on how much revenue the dataflow system is expected to garner.
IBM
If you think the legendary chip developer’s “uniform” of button-down shirts and bow ties would telegraph residence in a time before AI, you’re sadly mistaken. Developed in 2014, IBM’s TrueNorth neuromorphic chip is poised to move IBM to the forefront of “brain inspired” chip technology.
In an article on IBM’s website, Dharmendra S. Modha, the company’s chief scientist for brain-inspired computing, stated, “TrueNorth chips can be seamlessly tiled to create vast, scalable neuromorphic systems. In fact, we have already built systems with 16 million neurons and 4 billion synapses. Our sights are now set high on the ambitious goal of integrating 4,096 chips in a single rack with 4 billion neurons and 1 trillion synapses while consuming ~4kW of power.”
In September of this year, the company announced a 10-year, $240 million partnership with MIT to create the MIT-IBM Watson AI Laboratory.
Microsoft
Here’s how Wired’s Tom Simonite describes Microsoft’s current activity as a chipmaker: “Microsoft has spent several years making its cloud more efficient at deep learning using so-called field-programmable gate arrays (FPGA), a kind of chip that can be reconfigured after it’s manufactured to make a particular piece of software or algorithm run faster. It plans to offer those to cloud customers next year.”
AMD
Old-guard stalwart Advanced Micro Devices has survived perilous days to emerge as a serious AI chip player. The legacy semiconductor developer joined forces with Tesla to develop a custom AI chip for Tesla’s autonomous driving system, and is working on CPUs to compete with Intel and on GPUs to launch an assault on Nvidia.
The Disruptors: By the Numbers
Adapteva
Founded: 2008
Where: Lexington, MA
Funding secured: $5.1 million
Lead investors: Ericsson, Viola Ventures
Chip: “the world’s most energy efficient multicore microprocessor architecture,” vastly increasing the number of cores that can be integrated on a single chip.
Cerebras Systems
Founded: 2016
Where: Los Altos, CA
Funding secured: $52 million
Lead investors: Undisclosed
Chip: “Cerebras Systems is building a product that will fundamentally shift the compute landscape, accelerate intelligence, and evolve the nature of work,” and that’s pretty much all anybody is saying about this lavishly funded enterprise.
Graphcore
Founded: 2016
Where: Bristol, England
Funding secured: $110 million
Lead investors: Sequoia Capital, Atomico, Robert Bosch Venture Capital
Chip: a superfast “Intelligent Processing Unit” designed for machine learning that the company says “lets recent success in deep learning evolve rapidly towards useful, general artificial intelligence.”
Koniku
Founded: 2014
Where: Newark, CA
Funding secured: $1.65 million
Lead investors: Oriza Ventures, IndieBio, SOSV
Chip: a highly sensitive neurochip with actual biological neurons that the company hopes will one day house millions of neurons per chip.
Mythic
Founded: 2012
Where: Austin, TX
Funding secured: $16.2 million
Lead investors: Draper Fisher Jurvetson, Shahin Farshchi
Chip: GPU-level computational abilities merged with neural networks on a “button-sized” chip, promising 50 times longer battery life and increased data processing capabilities.
Who Wins the Battle?
Will this be the “upstart undoes the incumbents” story? As it stands now, that’s not very likely. The reason is fairly simple: developing and refining AI chip technology is an incredibly expensive and time-intensive process. In this arena, the combination of deep knowledge and deep pockets rules the game.
Keep in mind that developing the right infrastructure to support these advancements will take years and will impact whatever developments we see, chip-wise. Where do those infrastructure developments stand?
TBD.
And that, in essence, is one significant issue to consider: despite the gains some companies are already realizing from AI in process improvement, robotics, and workflow management, and the very real results early investors are seeing, we are witnessing the break of dawn on a big, confounding battlefield.
We have a long, long way to go.