Which might sound really geeky, but it has deep relevance to networking. In fact, IO Visor could provide a way to build an entire network topology without using any network hardware.
On a more down-to-earth scale, IO Visor could improve the performance of elements such as virtual switches by making it possible to run more of them in the Linux kernel, which has inherently better performance than Linux user space. It’s an issue that came up last year at VMware‘s Open vSwitch Conference, where there was some discussion about whether it’s better to move some packet processing out of the Linux kernel. (Intel, for one, was in favor of it.)
PLUMgrid is a key member here, having contributed the code that’s giving IO Visor its start.
Opening Up the Stack
Compared to OpenStack, which relates to the world at the level of the entire cloud, IO Visor reaches into the depths of Linux, where the kernel acts as a nerve center to connect applications’ input/output requests to the hardware.
“There’s still a lot of code to be written and a lot of innovation to come both within the kernel and on top of the kernel,” says Jim Zemlin, executive director of the Linux Foundation. The vendors involved are “seeking to do this work collectively as opposed to on their own.”
As you can see by the IO Visor presentation slide below, other open networking projects work even further down the stack — particularly the Open Compute Project, which gets into the hardware itself. But IO Visor is arguably the most esoteric of the bunch.
At its heart, IO Visor is addressing the fact that the Linux kernel isn’t virtualized. Because it talks directly to hardware elements such as memory or the CPU, the kernel can provide faster performance than Linux’s user space.
But the fact that it’s not virtualized means the kernel handles one request at a time. To accommodate a new function — a new IO module — you have to recompile the kernel.
IO Visor wants to let virtual machines be added to kernel space with more spontaneity. The plan involves extending a part of the kernel called the Berkeley Packet Filter (BPF), which, true to its name, sets up a filtering station that admits only certain types of data.
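To get a feel for what a packet filter does, here's a toy sketch — not the real kernel BPF instruction set, just an invented miniature version of the same idea: a tiny program is run against each packet, and its return value decides whether the packet passes through.

```python
# Toy illustration of BPF-style filtering (instruction set invented
# for this sketch; the real kernel BPF machine is richer).
from dataclasses import dataclass

@dataclass
class Insn:
    op: str  # "load" (byte at offset k), "jeq" (skip next insn if equal), "ret"
    k: int   # operand: an offset, a comparison value, or a verdict

def run_filter(program, packet):
    """Interpret the filter program over one packet; nonzero = accept."""
    acc, pc = 0, 0
    while pc < len(program):
        insn = program[pc]
        if insn.op == "load":
            acc = packet[insn.k]
        elif insn.op == "jeq":
            if acc == insn.k:
                pc += 1  # jump over the next (reject) instruction
        elif insn.op == "ret":
            return insn.k
        pc += 1
    return 0

# Accept only packets whose first byte is 0x45 (a common IPv4 header byte).
prog = [Insn("load", 0), Insn("jeq", 0x45), Insn("ret", 0), Insn("ret", 1)]
```

Running `run_filter(prog, packet)` returns 1 for packets starting with 0x45 and 0 otherwise — the "filtering station" in miniature.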
Here’s a possible effect: in network functions virtualization (NFV), you’d be able to run multiple virtual network functions (VNFs) together in the kernel. That could give you a complete service chain running in the kernel and enjoying the attendant performance benefits.
There’s a security application there as well. Imagine virtual firewalls and intrusion prevention systems being spun up spontaneously in the kernel, without forcing any recompiling.
“It really allows folks to do things real-time instead of start-stop-reboot,” says Lauren Cooney, senior director of software strategy at Cisco.
At a grander scale, IO Visor could let someone build a complete virtual network spread across multiple compute nodes. The kernel would do all the data-plane processing.
The mechanism for doing these things is called an IO Visor engine. The IO Visor project aims to create these engines, attendant plug-ins, and a variety of developer tools.
If IO Visor succeeds, it will be interesting to see how many tasks eventually move into kernel space. Applications do get better performance there, but nearly all virtual machines are built to run in user space today.
“I’m sure developers will go through the journey of discovery to see what is possible in user space and what is possible in kernel space,” says Wendy Cartee, PLUMgrid vice president of product management and marketing. “I’m sure both spaces will be used.”
How PLUMgrid Met IO Visor
PLUMgrid, since its inception, has been talking to the Linux Foundation about this idea of virtualizing the kernel. It’s an idea based on the limitations the company saw during early development of its virtual networking stack.
With routers and switches, in particular, “there’s frequently a centralized node that you would need to deploy,” Cartee says. “All traffic has to go through that centralized node. It’s not fully distributed.”
The phrase you’ll often see for this is tromboning, representing a long path with a 180-degree turn — the trail a packet must take when shunted from the virtual switch up to that centralized node and back. “The company was really founded to solve this problem,” Cartee says.
In fact, some Linux developers were drawn to join the company specifically because of these ideas and its vision of a programmable data plane, she says.