Intel is pushing further into artificial intelligence (AI) with a series of new products designed to accelerate the impact of AI in cloud and edge computing. The chipmaker says its AI portfolio is on target to generate $3.5 billion in revenue this year.

Intel revealed and demonstrated what it describes as the company’s first purpose-built ASICs for complex deep learning, offering scale and efficiency gains for cloud and data center customers. The Intel Nervana Neural Network Processors (NNP) — the NNP-T1000 for training and the NNP-I1000 for inference — are in production now and being delivered to customers, according to Intel.

The company also shared details about the next version of its Movidius Vision Processing Unit (VPU) for edge media, computer vision, and inference applications, which is slated for availability in the first half of 2020.

“This unique combination of hardware will enable the industry to embrace much larger and more complex AI algorithms, expanding what can be achieved with AI in the cloud and data center, an edge server, or an IoT device,” said Naveen Rao, VP and GM of Intel’s AI product group, in a prepared statement.

“CPUs really are the foundation for AI,” Rao said during a presentation at the Intel AI Summit in San Francisco. “There’s a clear trend where the industry is headed to build ASICs for AI. It’s because the growth of demand is actually outpacing what we can build in some of our other product lines.”

AI Compute Demand Growing

Intel estimates that the AI community’s demand for compute will increase 64 times over the next two years, and the company is aiming to bring all of its silicon together to work in concert to solve this problem. “All these capabilities — compute, memory, communication — integrated at the system level will really drive the next generation of AI solutions,” he said.

The software for AI is also difficult, relatively new, and not well abstracted, Rao said, adding that this is not a new challenge for the industry. “When these things become ready-made tools that are out in the open source, it will really democratize AI to a large group of customers that really didn’t have access to it,” he said. “We’re seeing that happen every day, and this is critical to the adoption of a new technology. I would say that happened with CPUs, it happened with networking, it happened with the internet.”

Intel is positioning the Nervana NNP-T1000 as an ASIC that strikes the balance between computing, communication, and memory to allow near-linear and energy-efficient scaling from small clusters up to supercomputers. The Nervana NNP-I1000 is a power- and budget-efficient ASIC that can run intense, multimodal inference at scale using flexible form factors, according to Intel.

The company says both products were developed for the massive AI processing needs of customers like Baidu and Facebook. In September, Intel revealed a series of new field-programmable gate arrays (FPGAs) designed to support new connectivity protocols and boost the acceleration of workloads in the cloud.

Intel claims that in early performance testing the new Movidius VPU provides more than four times the raw inference throughput of Nvidia’s similar-range system-on-a-chip (SoC) at one-third less power. The chipmaker says it achieves that with a 72 mm² footprint. The Movidius VPU will also be incorporated into Intel’s Dev Cloud for the Edge, which launched this week and allows customers to test algorithms on any Intel hardware before purchase.

“This next wave of AI requires huge increases in data and model complexity, some with trillions of potential parameters,” Rao said in a statement. “Training these cutting-edge algorithms requires demand for AI compute to double about every 3.5 months, which cannot be accomplished efficiently with today’s architectures. These AI breakthroughs require new architectures that are specifically designed for high-speed, mass-scale AI compute.”
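As a rough illustration of the growth figures quoted in this article (a back-of-the-envelope sketch, not from Intel's materials): doubling every 3.5 months compounds to roughly 116x over two years, while Intel's separate 64x-in-two-years estimate corresponds to doubling about every four months. The arithmetic can be checked as:

```python
import math

# Compound growth of AI compute demand, using the figures quoted above.
months = 24  # the two-year horizon Intel cites

# Doubling every 3.5 months (Rao's figure) compounded over 24 months:
growth_3_5 = 2 ** (months / 3.5)
print(f"Doubling every 3.5 months: ~{growth_3_5:.0f}x over two years")

# Intel's 64x-in-two-years estimate implies this doubling period:
doubling_period = months / math.log2(64)
print(f"64x in two years implies doubling every {doubling_period:.1f} months")
```

The two figures are close but not identical; both point to demand growth far beyond what general-purpose silicon roadmaps deliver, which is the case Rao makes for purpose-built AI ASICs.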