You'd expect Intel to take a lot of interest in the Open Compute Project (OCP), since the project's approach of creating open source building blocks relies heavily on off-the-shelf hardware.

So it's no surprise to see Intel discuss a swarm of technologies at the OCP Summit, which began today in San Jose, Calif.

Rather than cover everything from Intel's keynote, I thought I'd focus on a few items that really caught my eye — with an emphasis on a couple of forward-looking topics, namely, the Altera acquisition and the long-awaited rise of silicon photonics.

But let's start with plain old CPU cores.

Cores Get Bigger and Tougher

Jason Waxman, general manager of Intel's Cloud Platforms Group, briefly showed off a Xeon D system-on-chip (SoC) sporting 16 CPU cores. Not teeny Atom cores, but full Xeon cores based on the Broadwell architecture, with integrated 10-Gb/s Ethernet on the SoC.

It's a single-socket design, as opposed to the popular two-socket servers. The aim is to let an operator crowd more of these servers into a small space, a requirement that arose through discussions with Facebook. "Some of their workloads would actually prefer a more dense type of form factor," Waxman said.

Waxman didn't say it, but Broadwell gives Intel more ammunition against the rise of ARM-based data center processors. Several chip vendors are coming to market with processors designed around ARM's 64-bit architecture. Examples include AMD, AppliedMicro, Broadcom, Cavium, and Freescale.

Altera FPGAs Get Put to Use

During the first quarter, Intel will begin shipping a multichip package combining a Broadwell-based Xeon with an Altera Arria 10 FPGA, Waxman said. (If he meant "first quarter" literally, then shipping should commence in a couple of weeks.)

This kind of combination is what motivated Intel to spend more than $16 billion on Altera. The FPGA half of the package houses accelerators that offload work from the CPU. Programmability matters here because those accelerators often implement customized or relatively new code, Waxman said. That code is likely to keep changing, and an FPGA can be reprogrammed to keep up.

The package Waxman showed doesn't tightly integrate the companies' technologies; it's more like putting the chips side by side. But he noted that Intel has broader plans for FPGAs — enhancing motherboard specs to make sure they're FPGA-friendly, for example.

Intel also wants to make FPGA libraries openly available. These libraries would contain the code for programming particular algorithms into the chips. Writing that code means working at the register-transfer level (RTL), in hardware description languages that are unique to the chip world and aren't necessarily familiar to software coders.
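To make that concrete, here's a minimal, hypothetical sketch in Python of the calling pattern such a library might expose, so an application developer never touches RTL. The FpgaCompressor class, its bitstream name, and the fallback logic are all invented for illustration; this is not an actual Intel or Altera API.

```python
import zlib


class FpgaCompressor:
    """Stand-in for a vendor-supplied FPGA accelerator library (hypothetical)."""

    def __init__(self, bitstream_name):
        # A real library would load a pre-compiled FPGA image here; the RTL
        # behind it would have been written by the library's authors, not the caller.
        self.bitstream_name = bitstream_name
        self.available = False  # pretend no FPGA is attached in this sketch

    def compress(self, data):
        # The caller sees one ordinary function, whether or not an FPGA is
        # present; in this sketch we always take the CPU fallback path.
        if not self.available:
            return zlib.compress(data)
        raise NotImplementedError("real offload to the FPGA would go here")


accel = FpgaCompressor("gzip_compress")
print(len(accel.compress(b"hello " * 1000)))  # same call either way
```

The point of the pattern is that swapping in a new or customized accelerator changes only the library and the FPGA image, not the application code that calls it.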

Intel plans to make boards and software for FPGAs available later this year, Waxman said.

Silicon Photonics, at Last

I wrote about silicon photonics for years, every story bracketed with the caveat that the technology wouldn't be commercial for a long time. A "long time" has passed. Silicon photonics is finally becoming real.

They're not quite a prime-time attraction yet. "We do need to drive down the cost and increase the bandwidth," Waxman said.

Still, the next-generation data center seems destined to run on silicon photonics. Conventional photonics are made of non-silicon materials such as indium phosphide. Moving those devices onto silicon makes them cheaper to manufacture and easier to integrate into ordinary chips such as CPUs. Intel pictures silicon photonics eventually residing directly on cards or motherboards, a contrast to the bolted-on connectors currently required in servers.

Vendors have also been touting silicon photonics as a power saver compared with regular photonics.

So, in cases where density and power consumption are paramount — a situation hyperscale players find themselves in — silicon photonics are going to be vital, Waxman said.

"We want to make sure the industry is ready to adopt silicon photonics," he said. To that end, Intel's silicon photonics team has been working under the auspices of standards groups such as CLR4 and has been collaborating with partners, particularly Arista Networks.