Brownfield SDN – How to Deploy SDN Switching Within Your Existing Network
During the Pluribus Networks DemoFriday on Brownfield SDN switching, attendees learned how to enjoy the full benefits of Brownfield SDN without being a hyperscale or webscale shop. Netvisor took center stage as Pluribus Networks CMO Dave Ginsburg demonstrated how it enables overlay-free virtualization and interoperability with existing networks. After the presentation, Dave Ginsburg kindly took a few questions from the audience. Find them featured in the video above.
Make sure to check out the Pluribus DemoFriday archive page for more!
How were the op-ex savings calculated?
Pluribus: We looked at traditional box-by-box management and compared it to the efficiencies of managing a fabric from a single point. For analytics, we looked at the added advantage of eliminating the op-ex of a separate visibility fabric. Details are in our featured article on SDDC deployment.
In large Layer 2 networks, are you using STP to break loops, as you don’t use any overlay?
Pluribus: In a Pluribus-only network, the software vport table can be used to detect and break loops without STP. Netvisor does support STP for interoperability with non-Pluribus switches.
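The vport-based approach can be illustrated with a toy model: a software table learns MAC-to-port mappings, and rapid flapping of the same MAC between ports is treated as evidence of a forwarding loop. This is a generic sketch of the principle only, not Netvisor's actual implementation; the class name and thresholds are invented for illustration.

```python
import time
from collections import defaultdict, deque

FLAP_THRESHOLD = 3    # port moves within the window that suggest a loop
WINDOW_SECONDS = 1.0  # how far back to count moves

class VportTable:
    """Toy software vport table: maps MAC -> ingress port and flags a
    likely loop when a MAC flaps between ports rapidly."""

    def __init__(self):
        self.mac_to_port = {}
        self.moves = defaultdict(deque)  # MAC -> timestamps of port moves

    def learn(self, mac, port, now=None):
        """Record a MAC seen on a port; return True if a loop is suspected."""
        now = time.monotonic() if now is None else now
        prev = self.mac_to_port.get(mac)
        self.mac_to_port[mac] = port
        if prev is not None and prev != port:
            q = self.moves[mac]
            q.append(now)
            # Drop moves older than the observation window.
            while q and now - q[0] > WINDOW_SECONDS:
                q.popleft()
            if len(q) >= FLAP_THRESHOLD:
                return True  # same MAC bouncing between ports: likely loop
        return False
```

In a real switch the remediation would be to block one of the offending ports; the sketch only shows the detection side.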
What application use cases are offered and/or deployed via REST API?
Pluribus: All Netvisor commands are available via the CLI, the REST API, the Java API, and the C API. Future support will include the SAI. Almost all are also available via the vManage GUI.
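As a rough illustration of driving the switch over REST, the sketch below builds an authenticated JSON request using only the Python standard library (it constructs the request without sending it). The switch address, credentials, and the `/vRest/vlans` endpoint path are assumptions for illustration; consult the Netvisor REST API documentation for the actual base path, endpoints, and payloads.

```python
import base64
import urllib.request

# Hypothetical values -- substitute your switch address and credentials,
# and take the endpoint path from the Netvisor REST API documentation.
SWITCH = "192.0.2.10"
ENDPOINT = f"https://{SWITCH}/vRest/vlans"  # assumed path, for illustration

def build_request(url, user, password):
    """Build a Basic-auth GET request for a JSON API (not sent here)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req
```

Sending the request with `urllib.request.urlopen(req)` would return the JSON body that the CLI, Java, and C bindings expose through their own interfaces.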
You mentioned OpenStack integration, could you explain a bit more on level of integration of Netvisor with OpenStack?
Pluribus: Where an external controller is deployed (e.g., Red Hat RHEL OSP), the Pluribus node supports the Neutron ML2 plug-in for networking. It also supports multi-tenancy based on Netvisor VNETs, as well as virtualized appliances via KVM, which can be assigned to individual tenants. Netvisor also ships with an integrated CentOS OpenStack controller, eliminating the need for a separate server.
How do you detect microbursts in the network caused by congestion/misconfiguration/DDoS?
Pluribus: Netvisor records statistics for ports, VLANs, flows, and TCP connections in rolling log files that can hold days, weeks, or months' worth of data, depending on traffic volume and the disk space available for logging. A microburst typically shows up first in the port statistics, which can then be correlated with the VLAN, flow, and TCP connection statistics to identify the cause of the congestion and suggest remediation steps.
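The correlation described above can be approximated offline: given cumulative byte counters sampled at a fine interval, flag the intervals whose rate far exceeds the baseline rate. This is a generic heuristic sketch, not Netvisor's analytics pipeline; the function name, interval, and threshold factor are illustrative assumptions.

```python
import statistics

def find_microbursts(samples, interval_s=0.001, factor=10.0):
    """Return indices of sampling intervals whose byte rate exceeds
    `factor` times the median rate -- a crude microburst heuristic.

    `samples` is a list of cumulative byte-counter readings taken
    every `interval_s` seconds (e.g., from rolling port statistics).
    """
    rates = [(b - a) / interval_s for a, b in zip(samples, samples[1:])]
    if not rates:
        return []
    baseline = statistics.median(rates)  # robust against the bursts themselves
    return [i for i, r in enumerate(rates) if r > factor * max(baseline, 1.0)]
```

A flagged interval would then be cross-checked against VLAN and flow statistics for the same timestamp to see which traffic caused the spike.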
Do you support Layer 2 ECMP?
Pluribus: Netvisor supports ECMP routing as well as Active/Active VLAG in clusters.
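ECMP load sharing generally works by hashing a flow's 5-tuple to select one of the equal-cost next hops, so every packet of a flow follows the same path while different flows spread across all paths. A minimal, vendor-neutral sketch of that selection (not Netvisor code; hash choice and field encoding are illustrative):

```python
import hashlib

def ecmp_next_hop(flow, next_hops):
    """Pick a next hop by hashing the flow's 5-tuple.

    `flow` is (src_ip, dst_ip, protocol, src_port, dst_port);
    `next_hops` is the list of equal-cost candidates.
    """
    key = "|".join(str(field) for field in flow).encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]
```

Because the hash is deterministic, packet ordering within a flow is preserved, which is the same property an Active/Active VLAG relies on when splitting traffic across cluster members.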
In addition to CLI & Rest, what traditional northbound interfaces are supported from a vFabric perspective?
How is a Layer 2 link generated logically between two different ports between two different vendor nodes?
Pluribus: Ports can be aggregated manually or using LACP.
Does the additional CPU/memory on the server-switch allow for full BGP routes (as opposed to traditional switches that have a limit typically far less than full routes)?
Pluribus: In theory, yes, but this is not yet implemented in production code.
Can you talk about how Netvisor handles VM migration/movement and lifecycle management? Is your switch-server architecture transparent to VM lifecycle issues?
Pluribus: From the application standpoint, it is transparent, but given the embedded analytics, we can track the movement of VMs and set rules (vFlows) accordingly. In the future, we will have closer integration between Netvisor and vSphere.
Are edge router services like MPLS and VPLS supported?
Pluribus: The system is currently positioned as a converged edge platform, but not a classical edge router. Some of the solutions provided by MPLS and VPLS may be solved by the system’s L2/L3/VXLAN support.
Does nvOS support sub-second link protection using BFD over VLAG groups and over VXLAN tunnels?
Pluribus: Netvisor monitors links for failure at the physical layer now. A future release will support BFD.
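For context, BFD achieves sub-second protection by declaring a session down when no control packet arrives within the detection time (detect multiplier × receive interval). A minimal sketch of that timer logic, generic to BFD rather than specific to any Netvisor release; the default interval and multiplier are illustrative:

```python
def bfd_session_up(last_rx_times, now, interval_s=0.05, detect_mult=3):
    """Return True while the peer is considered alive.

    `last_rx_times` holds arrival timestamps of received BFD control
    packets; the session is declared down once no packet has arrived
    within detect_mult * interval_s (150 ms with these defaults).
    """
    if not last_rx_times:
        return False  # never heard from the peer
    return (now - max(last_rx_times)) < detect_mult * interval_s
```

Physical-layer monitoring, by contrast, only detects loss of the local link; BFD's hello exchange also catches failures further along the path, such as across a VXLAN tunnel.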