At the Intel Developer Forum yesterday, the company even brought out an executive from Chinese cloud giant Baidu to talk about the Xeon Phi, Intel’s machine learning chip. The choice was interesting considering Baidu has been a vocal Nvidia customer.
The potential ace up Intel’s sleeve is the pending acquisition of Nervana, a deep learning startup reportedly working on a chip of its own. (Deep learning is a form of machine learning that uses many-layered neural networks, loosely inspired by the structure of the brain.) Intel executive vice president Diane Bryant mentioned Nervana during yesterday’s keynote, but with the deal still not closed, it’s understandable that she didn’t articulate Intel’s plans for the startup.
The more immediate news for Intel was the announcement of its latest processor for machine learning. The Xeon Phi is getting an upgrade this year, codenamed Knights Landing. Yesterday, the company announced that the chip’s next iteration, called Knights Mill, will launch in 2017.
Knights Mill improves performance and efficiency by adding to the Intel instruction set and tweaking some of the architecture’s floating-point processing, Bryant said.
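Intel hasn’t detailed those floating-point changes, but a common way machine-learning hardware gains throughput is by computing at reduced precision, since neural networks tolerate small numerical errors. The sketch below only illustrates that general precision-versus-accuracy trade-off in NumPy; it says nothing about Knights Mill’s actual instructions, and the matrix sizes are arbitrary.

```python
import numpy as np

# Illustrative only: compare a matrix-vector product done in full
# single precision (float32) against the same product in half
# precision (float16), the kind of narrower format deep-learning
# hardware often favors for throughput.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

full = w @ x                                                  # float32 reference
half = (w.astype(np.float16) @ x.astype(np.float16)).astype(np.float32)

# Relative error of the half-precision result, measured by vector norm.
# For network-scale values it stays small despite using half the bits.
rel_err = np.linalg.norm(full - half) / np.linalg.norm(full)
print(f"relative error at float16: {rel_err:.5f}")
```

The point is not that float16 is free, but that for this class of arithmetic the lost bits barely move the answer, which is why precision is a natural dial for chip designers to turn.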
GPUs are the primary competition for Xeon Phi and its successors. GPUs were originally created for computer graphics, as the name implies, but their number-crunching abilities have proven useful in machine learning. That’s nice for Nvidia, the GPU powerhouse, but it could be a problem for Intel, whose CPUs might be relegated to ordinary boot-up duties while GPUs handle all the machine-learning fun.
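The number-crunching in question is mostly dense linear algebra: a single neural-network layer is essentially a large matrix multiply followed by a simple nonlinearity, which is exactly the regular, massively parallel arithmetic GPUs (and wide-vector manycore chips like Xeon Phi) are built to stream through. A minimal NumPy sketch, with layer sizes chosen purely for illustration:

```python
import numpy as np

# One dense layer: a matrix-vector multiply plus a ReLU nonlinearity.
# Sizes are illustrative, not drawn from any real model.
rng = np.random.default_rng(1)
weights = rng.standard_normal((1024, 4096))   # layer weight matrix
inputs = rng.standard_normal(4096)            # one input vector

activations = np.maximum(weights @ inputs, 0.0)  # matmul, then ReLU

# Nearly all the work is the 1024 x 4096 multiply: about 4.2 million
# multiplies and as many adds for this one layer, repeated for every
# layer and every input -- which is why raw parallel FLOPS dominate
# the hardware argument.
print(activations.shape)
```

Running many such multiplies in parallel is the whole game, so whichever chip sustains the most of them per watt tends to win the workload.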
Xeon Phi is meant to head off that scenario. It serves as both host processor and machine-learning coprocessor, so a single chip can run a machine-learning server.
Intel’s challenge, then, is to show that the Phi can keep up with, or possibly outdo, GPUs. The company recently issued some benchmarks to that effect, but Nvidia claimed this week that Intel’s testing used outdated GPUs.
In addition to Baidu’s senior vice president Jing Wang, Bryant brought out Slater Victoroff, founder of Indico, a startup using deep learning to analyze text and images. Victoroff envisions deep learning leading to specialized machines that build customized models of the world based on each customer’s needs. “It allows you to actually treat each of your customers as an individual,” he said.
But that could mean generating thousands of models, and at that point, “GPUs don’t make sense anymore,” he said. He prefers the Intel model, he said, in which the host processor also runs the deep-learning algorithms.