Stu Bailey of FlowForwarding.org (and also CTO of Infoblox) is keynoting the Erlang User Conference in Stockholm on June 10th, where he will also announce the availability of LINCX, a vSwitch running on a bare-metal Erlang run-time (LING). We had a chance to catch up with Stu last week, before he headed off to Europe. For readers who haven’t already, check out our interview with Stu from two years ago, when the LINC project and FlowForwarding.org were created.
To refresh our readers’ memories, can you share a couple of words about FlowForwarding.org and its mission?
Bailey: FlowForwarding.org is a project and a community promoting free, open-source SDN implementations under the commercially friendly Apache 2 license, with many of the projects supporting the ONF OpenFlow standard. It is primarily funded by Infoblox today, but all of FlowForwarding’s projects are free for use by the community, including commercial entities.
What’s your updated view of SDN since we last spoke two years ago?
Bailey: I see SDN not as a protocol but as a technology approach, a transformation from hardware-defined networks, with their capex and opex model, to a new and different programmable model. I see SDN encompassing all aspects of the programmable data center, including network virtualization solutions like VMware’s NSX and PLUMgrid, as well as programmable Ethernet fabrics.
As well, even though we’re talking two years later, SDN is still in its infancy; outside of cloud service providers and some telecommunications providers, the market at large is not in a position to consume foundational SDN yet.
And for those of us unfamiliar with LINC and LINCX, can you say a couple of words about them?
Bailey: LINC is a pure OpenFlow software switch written in Erlang. It’s implemented in the operating system’s userspace as an Erlang node and has historically run on Erlang run-time environments on Linux. The recently announced LINCX is a new production-ready, open-source, software-defined networking switch that is fully programmable. It’s basically LINC running on Erlang on bare metal (LING). No dedicated hardware is required for LINCX: the software can run on any standard off-the-shelf physical or virtual Linux or Xen server, as well as on low-cost network white-box devices.
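To make “pure OpenFlow in Erlang” concrete, here is a minimal sketch, using hypothetical module and function names rather than LINC’s actual internals, of how Erlang’s bit syntax decodes the fixed 8-byte header that the ONF spec puts in front of every OpenFlow message:

```erlang
-module(ofp_sketch).
-export([decode_header/1]).

%% Every OpenFlow message begins with the same fixed 8-byte header
%% (per the ONF spec): version:8, type:8, length:16, xid:32.
%% Erlang's bit syntax pattern-matches it in a single clause.
decode_header(<<Version:8, Type:8, Length:16, Xid:32, Body/binary>>) ->
    {ok, #{version => Version,   % e.g. 4 means OpenFlow 1.3
           type    => Type,      % e.g. 14 is OFPT_FLOW_MOD
           length  => Length,    % total message length in bytes
           xid     => Xid},      % transaction id chosen by the controller
     Body};
decode_header(_Short) ->
    {error, truncated_header}.
```

Each message type then gets its own decoding clause, which is part of why a binary protocol like OpenFlow maps so naturally onto Erlang.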
Why did you decide to create LINC/LINCX?
Bailey: We wanted to explore the fundamentals of programmable networking — a world where networking is dominated by CPUs rather than by NPUs [network processors] and ASICs. LINCX allowed us to start essentially in a clean room environment and learn how hard it was to create a programmable data plane and networking platform that didn’t have any commercial agenda tied to it.
How does LINCX compare with Open vSwitch (OVS)? How is it similar? And how is it different?
Bailey: I would say that LINCX and OVS are two very different animals. The biggest difference is that OVS was built to be a software-based switch for virtual environments that happens to also support OpenFlow. LINCX’s roots are in a programmable data plane, with no logic or semantics built in to handle traditional networking functions like Layer 2 or Layer 3 capabilities. LINCX supports OpenFlow, though, and can be programmed to perform basic networking, but it’s not focused on being a Layer 2 switch.
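One way to picture that difference: a pure programmable data plane only runs a generic match-and-apply loop over whatever flow entries a controller installs; any Layer 2 or Layer 3 behavior exists only because entries producing it were pushed in. The sketch below uses hypothetical names and is not LINCX’s internal representation:

```erlang
-module(flow_table_sketch).
-export([lookup/2]).

%% A flow entry pairs a match map with an action list, e.g.
%% {#{in_port => 1, eth_type => 16#0800}, [{output, 2}]}.
%% Entries are assumed to be sorted by priority, highest first.

%% Return the actions of the first entry whose match fields are all
%% satisfied by the packet's fields; otherwise signal a table miss.
lookup([{Match, Actions} | Rest], PacketFields) ->
    case matches(Match, PacketFields) of
        true  -> {apply_actions, Actions};
        false -> lookup(Rest, PacketFields)
    end;
lookup([], _PacketFields) ->
    table_miss.

%% Every field in the entry's match must equal the packet's value;
%% fields absent from the match are wildcarded.
matches(Match, PacketFields) ->
    maps:fold(fun(Field, Value, Acc) ->
                      Acc andalso
                          maps:get(Field, PacketFields, undefined) =:= Value
              end, true, Match).
```

With an empty table, every packet is a table miss (which OpenFlow typically reports to the controller as a packet-in); the same engine becomes a learning switch only when a controller installs entries along the lines of #{eth_dst => Mac} with [{output, Port}] actions.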
We’ve asked this before, but over the last two years, have you found that the LINC/LINCX path made more sense than directing resources into OVS?
Bailey: I happen to think that one indicator of a healthy SDN market is that there are lots of choices for all the layers in networking. In this case, having multiple choices for software-based data planes (vSwitches) is a good thing. I see that OVS has value in the network virtualization model, but we were after fundamental programmable data planes, and so we felt starting with a clean slate made more sense.
Can and will LINCX replace OVS? For example, will LINCX run in a KVM environment?
Bailey: Today, LINCX runs in an Erlang VM on Linux, or in an Erlang VM on bare metal. It’s more likely we’ll port to Hyper-V or get it working on VMware than spend our cycles on porting LINCX to KVM. KVM already has OVS, which is a good virtual switch, and it’s not our intention to move in that direction at this point. I’m not looking to compete head-to-head with OVS.
With the benefit of a few years of experience, has your choice of Erlang impacted the adoption of LINC/LINCX?
Bailey: I still believe Erlang is a good choice because it is ideal for handling distributed communication problems and has lots of built-in technology, such as a pattern-matching compiler and dynamic code loading. As well, having the Erlang VM run on Xen provides us with a high-performance OS-less platform on which we can run LINCX. Erlang is well proven in large-scale software deployments such as WhatsApp, which uses Erlang to handle its enormous user base.
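As a small illustration of why that pattern matching suits networking, the sketch below, again with hypothetical names rather than LINCX code, classifies an Ethernet II frame with one declarative clause per EtherType:

```erlang
-module(eth_sketch).
-export([classify/1]).

%% An Ethernet II frame starts with destination MAC (48 bits),
%% source MAC (48 bits), and EtherType (16 bits).
classify(<<Dst:48, Src:48, 16#0800:16, Payload/binary>>) ->
    {ipv4, Dst, Src, Payload};
classify(<<Dst:48, Src:48, 16#0806:16, Payload/binary>>) ->
    {arp, Dst, Src, Payload};
classify(<<Dst:48, Src:48, 16#86DD:16, Payload/binary>>) ->
    {ipv6, Dst, Src, Payload};
classify(_Frame) ->
    unknown.
```

Dynamic code loading complements this: OTP’s standard hot-code-upgrade path lets a running node swap in a new version of such a module without stopping packet processing.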
In the end, I don’t think writing in Erlang has affected adoption of LINCX. Certainly it is not as well known and might hurt in recruiting other developers, but we know that the Erlang community has core competency in scaling and distributed systems problems, so there are many qualified contributors out there in the community. To get value from LINCX, you don’t necessarily need to write in Erlang. You can use the API to consume LINCX capabilities, and LINCX supports OpenFlow. Furthermore, I see consumption of this programmable fabric function as being controlled by cloud management platforms such as OpenStack with Neutron or, say, the VMware stack. So LINCX will not even be directly accessed by end-users.
In the long term, I’d expect to see some vertically integrated application stacks, with an application driving the network functions directly: say, Hadoop maximizing performance by handling the programmable data plane. But that’s going to take a while.
What’s the coolest thing about LINC/LINCX that’s unique?
Bailey: I think the basic idea that you can run a fully programmable Ethernet data plane on any machine with two or more ports, with good performance and OpenFlow 1.2-1.4 support, is pretty cool. I have it running at home, on a tiny server, as my gateway, and it’s performed flawlessly!
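For readers who want to try a similar two-port setup, LINC-family switches are configured through an Erlang sys.config. The sketch below follows the general shape of LINC’s configuration, but the exact keys and backend names vary by release, so treat it as illustrative and check the FlowForwarding.org repositories for the current format:

```erlang
%% Illustrative sys.config for a two-port box controlled over
%% OpenFlow; key names approximate LINC's and may differ in
%% your release.
[{linc,
  [{of_config, disabled},
   %% Bind switch ports to the machine's two physical interfaces.
   {capable_switch_ports,
    [{port, 1, [{interface, "eth0"}]},    %% upstream/WAN side
     {port, 2, [{interface, "eth1"}]}]},  %% LAN side
   {logical_switches,
    [{switch, 0,
      [{backend, linc_us4},               %% OpenFlow 1.3 backend
       {controllers,
        [{"ctrl0", "192.0.2.10", 6653, tcp}]},  %% example controller
       {ports, [{port, 1, {queues, []}},
                {port, 2, {queues, []}}]}]}]}]}].
```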
What’s the state of LINC/LINCX today? Any proofs-of-concept (PoCs) running today?
Bailey: It’s available for everyone to download and try. And it’s powering part of the Infoblox corporate networking today.
Who are the primary contributors to LINC/LINCX? How many organizations/individual developers are participating at this point in time?
Bailey: We had three contract development houses with Erlang expertise help us build it out. There’s also a list of hardware partners on the FlowForwarding.org page. All in all, we spent about $1 million, with about six person-years invested over the last year. We’re certainly looking for more developers to jump in, use what we have, and contribute back.
Any lessons learned for other folks looking to participate in the open SDN ecosystem? Any gaps you think need to be filled?
Bailey: My biggest lesson was learning how early this whole SDN phenomenon is, and how little understanding there is in the community about how much opportunity there is for CPU-based forwarding compared to ASIC-/NPU-based forwarding. In the end, I think we ended up jumping in and filling the gap by basically just doing it and showing it could be done. Certainly, there’s plenty left to do above the data plane, and the bulk of the work is in the control plane and above; it’s going to take a lot of players and a lot more time before we replace the traditional style of networking. At the end of the day, it’s not even the technology but the structure of the industry today that’s adding friction to SDN deployment.