Last week’s OpenStack Grizzly release has a lot of people excited about the new features now available. The open source cloud stack, backed by a burgeoning community, now has more than 200 new features across all of its projects, including compute, storage, and networking. All of the new networking innovations are found in OpenStack Networking, which has been enhanced with a network services framework and a component management framework to provide high availability and scalability. Additional features such as security groups and, perhaps most important, multi-node networking support improve security and reliability.
Without multi-node networking, all tenant network gateway traffic was forwarded through a single host node, which was both a single point of failure and a performance bottleneck. Multi-node networking uses the networking scheduler to distribute L3 and DHCP services across several hosts. Other important high-availability functions are also included, and we expect significantly more deployments to use OpenStack Networking with Grizzly.
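To give a feel for what the networking scheduler enables, here is a minimal CLI sketch of spreading routers across multiple L3 agents; the agent IDs and router names below are hypothetical placeholders:

```bash
# List the networking agents (L3, DHCP, OVS) running across the hosts
quantum agent-list

# Pin tenant routers to different L3 agents (IDs are illustrative)
quantum l3-agent-router-add 4f7b7ac0-1111-4d2e-9c6a-000000000001 router-tenant-a
quantum l3-agent-router-add 4f7b7ac0-2222-4d2e-9c6a-000000000002 router-tenant-b

# Verify which routers a given agent is hosting
quantum router-list-on-l3-agent 4f7b7ac0-1111-4d2e-9c6a-000000000001
```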
OpenStack Networking enables more sophisticated tenant networking across compute nodes, supporting isolated virtual networks beyond the 4,096 VLAN limit. Many deployments start off using simple VLAN isolation with OpenStack’s flat networking model, but even those simple deployments can run into operational challenges.
The problems begin when you need to configure these VLANs on the virtual switches on the compute nodes (hypervisors) as well as in the physical network. OpenStack Networking’s plugin architecture currently supports a single plugin at a time. Most users deploy OpenStack Networking with the Open vSwitch plugin to configure the virtual switches on each host, which leaves the physical switches to be configured manually. This approach is practical for small test deployments running on one or two nodes with a small number of VLANs, but it quickly becomes unmanageable with dozens of hosts spread across several racks.
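To make the gap concrete, here is roughly what the Grizzly OVS plugin configuration looks like for VLAN tenant networks. The physical network name, VLAN range, and bridge name below are illustrative; note that nothing in this file touches the physical switches, whose trunk ports must still be configured by hand:

```ini
# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (excerpt)
[OVS]
tenant_network_type = vlan
# Tenant networks draw VLAN IDs from this range on "physnet1"
network_vlan_ranges = physnet1:1000:1999
# Map the physical network to the OVS bridge attached to the uplink NIC
bridge_mappings = physnet1:br-eth1
```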
Cisco recognized this problem at the start of the OpenStack Networking project more than a year ago and built its OpenStack Networking plugin to overcome this limitation. The Cisco Nexus plugin for OpenStack Networking automatically configures VLANs on multiple virtual and physical switches. In Grizzly, the plugin adds features to configure aggregation switches and to intelligently trunk these VLANs only where they are needed. You can read more about the new Cisco Nexus plugin for Grizzly OpenStack Networking here.
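As a rough illustration of how the plugin is wired up, the Grizzly-era Cisco plugin configuration pairs the OVS plugin for the virtual switches with a Nexus driver for the physical ones. The switch IP, host-to-port mappings, and credentials below are placeholders, and the exact section and option names are worth verifying against the plugin documentation for your release:

```ini
# /etc/quantum/plugins/cisco/cisco_plugins.ini (illustrative excerpt)
[CISCO_PLUGINS]
# The Cisco plugin delegates virtual-switch work to OVS and
# physical-switch work to the Nexus sub-plugin
vswitch_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
nexus_plugin = quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin

# One section per Nexus switch: which compute host hangs off which port
[NEXUS_SWITCH:192.168.100.10]
compute-host-1 = 1/10
compute-host-2 = 1/11
ssh_port = 22
username = admin
password = changeme
```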
The Cisco Nexus plugin for OpenStack Networking is open source and part of the standard Grizzly OpenStack release. The plugin has minimal dependencies on other projects, which makes it easier to enhance over time, for example by exposing even more Nexus capabilities to tenant networks.
This autonomy was a key intent of the OpenStack Networking plugin architecture: isolate functionality into separate modules. Other proposed approaches modify the popular OVS plugin, but they have not yet been embraced by the OpenStack Networking community (and may never be), because they introduce upstream dependencies that can force a fork of the code and lead to orphaned projects, leaving users with unsupported software and no upgrade path.
The Cisco Nexus plugin simplifies a number of operational challenges with OpenStack deployments, summarized below:
| Requirement | Challenge | Cisco Plugin Resolution |
| --- | --- | --- |
| Extend tenant VLANs across virtualization hosts | VLANs must be configured on both physical and virtual networks, but OpenStack Networking supports only a single plugin at a time, so the operator must choose which parts of the network to configure manually. | Accepts OpenStack Networking API calls and configures both physical and virtual switches. |
| Efficient use of scarce VLAN IDs | Static provisioning of VLAN IDs on every switch rapidly consumes all available VLAN IDs, limiting scalability and increasing exposure to broadcast storms. | Conserves limited VLAN IDs by provisioning and de-provisioning VLANs across switches as tenant networks are created and destroyed. |
| Easily configure tenant VLANs on the ToR (top-of-rack) switch | Operators must statically provision all available VLANs on all physical switches, a manual and error-prone process. | Dynamically provisions tenant-network-specific VLANs on switch ports connected to virtualization hosts through the Nexus plugin driver. |
| Intelligent assignment of VLAN IDs | Switch ports connected to virtualization hosts are configured to carry all VLANs, quickly hitting hardware limits. | Switch ports connected to virtualization hosts are configured only with the VLANs for the networks present on that host, keeping port-to-VLAN associations accurate. |
| Large, multi-rack deployments require aggregation-switch VLAN configuration | Static provisioning of VLAN IDs on every switch rapidly consumes all available VLAN IDs, limiting scalability and increasing exposure to broadcast storms. | Conserves limited VLAN IDs by provisioning and de-provisioning VLANs across switches as tenant networks are created and destroyed. |
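In practice, the whole workflow collapses to the standard API calls; a sketch like the following (the network name and subnet are illustrative) creates a tenant network, and the plugin allocates a VLAN from the configured range and provisions it on the virtual and physical switches only where the network is actually used:

```bash
# Create a tenant network; the plugin allocates a free VLAN ID from the
# configured range and trunks it where needed
quantum net-create tenant-a-net
quantum subnet-create tenant-a-net 10.10.1.0/24

# Tearing the network down releases the VLAN on the switches as well
quantum net-delete tenant-a-net
```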
If you’re thinking about deploying OpenStack with VLAN networking, these features will give you a more scalable environment that is easier to deploy and manage.
With Grizzly out the door, we now turn our focus to the Havana release. One of the important next steps in making OpenStack Networking meet customer data center deployment requirements is a reliable and scalable L3 (Layer 3) network implementation. As the OpenStack Networking services framework matures and services such as load balancing (integrated in Grizzly), firewall, and VPN become part of OpenStack Networking, an L3 back-end implementation that is pluggable and integrates with customers’ physical network devices will become imperative. To that end, at the upcoming Havana design summit in Portland, we will be leading the discussion on how L3 services can be improved in OpenStack Networking with these design goals. We hope to see you in Portland at the OpenStack Summit! Also, visit us at our booth to see Cisco’s OpenStack demo.