Tom Nolle has done a great job of explaining that Oracle’s announced acquisition of Xsigo is an Infiniband deal, and that Xsigo’s version of virtual networking has nothing to do with the kind of network virtualization that Nicira and its clones are doing, nor with SDN, OpenFlow, or any other hot buzzword in networking (http://blog.cimicorp.com/?p=861).
What this is about is a bet by Oracle that Infiniband is going to be a good foundation for building data center networks to support virtualized workloads and/or whatever cloud is (Larry Ellison has repeatedly demonstrated a personal Infiniband fetish in public appearances in recent years). It’s no secret that the exa-whatever series of high-performance DC boxes from Oracle uses Infiniband as the fabric within the cabinet. In the short to medium term one could make a good case that the best and most “virtual data center”-ready fabrics are based on Infiniband. To put it simply, Infiniband has better I/O integration than Ethernet. This is because Infiniband includes deep support for the critical data center storage protocols that are essential to decent I/O performance in distributed computing, along with hardware offload of critical protocol and payload processing functions that otherwise drain server CPUs and add significant latency. Some day the emerging Ethernet fabrics (QFabric, FabricPath, VCS, TRILL, SPB) may deliver these capabilities (RDMA, anyone?) and more. If that were to happen, the conventional wisdom is that Ethernet, which benefits from much higher production volumes (orders of magnitude) and a much larger R&D budget across the ecosystem, will ultimately eclipse Infiniband, killing the technology.
The wildcard is how long it takes all of this to play out. The performance and scale/density demands of enterprise and cloud provider data centers require solutions that can deliver loss-less storage networking, differentiated QoS, and whole-fabric management models now. If Infiniband has that capability today (I don’t have the personal experience to say whether it does or doesn’t) during the 10Gb/s server and switch refresh that is occurring now and will be huge for the next few years, there will be a large number of cases where Infiniband is technically the best solution. Less clear is whether that Infiniband solution will be economically competitive and justifiable in IT budgets. Network virtualization, in the encapsulation/overlay form, could be supported by the Ethernet interfaces exposed by Infiniband fabrics.
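To make the encapsulation/overlay idea concrete: these schemes wrap each tenant Ethernet frame in an extra header carrying a virtual network identifier, so the overlay rides on whatever Ethernet interface the underlying fabric exposes. Here’s a minimal sketch using the VXLAN header format (RFC 7348) as a representative example; VXLAN is just one common overlay flavor and is not something the Xsigo or Oracle products are claimed to use:

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header (RFC 7348) to an inner Ethernet frame.

    Header layout (8 bytes): flags byte (0x08 = valid-VNI bit set),
    3 reserved bytes, 24-bit VNI, 1 reserved byte. In a real deployment
    this payload would then be wrapped in an outer UDP/IP/Ethernet
    envelope by the vswitch or NIC; only the VXLAN shim is shown here.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

# Dummy 60-byte inner frame on virtual network 5000.
packet = vxlan_encap(b"\x00" * 60, vni=5000)
assert packet[:4] == b"\x08\x00\x00\x00"            # flags + reserved
assert int.from_bytes(packet[4:7], "big") == 5000   # the VNI
```

The point of the sketch is that the overlay only needs a plain Ethernet/IP underlay underneath it, which is why an Infiniband fabric that exposes Ethernet interfaces could still carry this style of network virtualization.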
The real challenges here for Oracle will be positioning and building credibility as a supplier of cloud-scale computing solutions. Oracle clearly has designs on this space, but it’s not clear that performance to date and existing products are getting them to the “hockey stick”. System- and fabric-wide networking capabilities based on tight integration of Infiniband adapters into server designs could be a powerful competitive weapon for Oracle, and that’s what this acquisition offers in the best case. I think that, like everything else in networking, Infiniband will get sucked into the Ethernet vortex and the volume/cost economics will kill it. Oracle has bet it can win a race with that macro-trend, and I think it looks like a pretty good bet for the next few years. However, I’m still professionally focused on accelerating the introduction of a sufficient Ethernet fabric solution, and I’m comfortable betting on the growth rate of that network industry segment for the next few years.