Thanks to everyone who joined us for the July 25 DemoFriday! ConteXtream gave an insightful, practical demonstration of ContexNet, its software-defined networking (SDN) fabric for NFV that provides interconnectivity with subscriber-level granularity and visibility to help carriers virtualize the Gi-LAN network.
After the demo, presenter Ron Sidi, ConteXtream chief architect, took questions from DemoFriday participants. Read that Q&A below, watch the full presentation, or check out the teaser video and other resources below.
What is subscriber awareness? Why is it needed?
ConteXtream: Subscriber awareness is the ability of an SDN solution to associate traffic with a customer when making traffic-related decisions. A "customer" can be:
- Network (wholesale or roaming)
- Enterprise (e.g. MPLS VPN)
- Residence (e.g. Broadband Access or IPTV)
- Endpoint (e.g. Mobile)
Today, subscriber awareness in the network is distributed and hard to access for purposes such as visibility, isolation, and new services. These are the problems that subscriber awareness in the SDN fabric solves.
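As a rough illustration of what subscriber awareness means at the fabric level, the sketch below attaches a customer record to a flow so per-subscriber decisions become possible. All names here are hypothetical, not ContexNet APIs, and real deployments would key on session or tunnel identifiers rather than bare IPs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subscriber:
    subscriber_id: str
    customer_type: str    # "network", "enterprise", "residence", or "endpoint"
    policy_profile: str

# Simplified subscriber table keyed by the flow's source IP (illustrative).
SUBSCRIBERS = {
    "10.0.0.7": Subscriber("sub-1001", "endpoint", "gold"),
    "10.0.0.9": Subscriber("sub-1002", "residence", "bronze"),
}

def classify_flow(src_ip: str) -> Optional[Subscriber]:
    """Return the subscriber context for a flow, if known."""
    return SUBSCRIBERS.get(src_ip)
```

With this context available at the fabric, every traffic-related decision can be taken per subscriber rather than per box.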
What is the mapping layer in your architecture and what is the advantage of having that?
ConteXtream: The ContexNet architecture is an extended implementation of the IETF NVO3 architecture, in which a Network Virtualization Authority (NVA) provides address mappings and other information to Network Virtualization Edges (NVEs). The ContexNet mapping system is the NVA component of that architecture. (LISP is an implementation of NVO3, and within LISP the mapping system takes the role of the NVA.) It is a distributed database entity that supports networking in the virtual environment: it keeps track of VM location independently of how routing and forwarding happen. This layer makes it possible to map flows to a VNF at the desired granularity. Because this information is also globally available, the solution is highly scalable.
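The NVA idea can be sketched as a lookup service that maps an endpoint identifier to its current locator (in LISP terms, EID to RLOC), so identity is decoupled from location. This is a minimal illustration, not ContexNet code:

```python
# Minimal sketch of an NVA-style mapping service (as in NVO3/LISP):
# endpoint identifiers map to locators, so a VM can move without its
# identity changing. Names are illustrative assumptions.
class MappingService:
    def __init__(self):
        self._map = {}  # endpoint id (EID) -> locator (RLOC), e.g. current host

    def register(self, eid, rloc):
        self._map[eid] = rloc  # an NVE registers or updates a mapping

    def resolve(self, eid):
        return self._map.get(eid)  # an NVE queries before forwarding

ms = MappingService()
ms.register("vnf-firewall-3", "host-rack2-07")
# A VM migration only updates the locator; forwarding state follows:
ms.register("vnf-firewall-3", "host-rack5-01")
```

Because every edge resolves against the same globally available mappings, adding edges or moving VMs does not require reconfiguring the rest of the network.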
Are the chaining decisions made per subscriber and is it possible for changes to be made to the SFP as the flow progresses in the chain?
ConteXtream: Chaining decisions are made at whatever granularity the requested forwarding chain requires. When implementing a Gi-LAN service chaining solution, the granularity would be each subscriber's service. The decision is then translated into the appropriate rules, which are applied at every hop in the network. This can be done when the subscriber first registers with the network (in which case all decisions are taken and rules are applied for all hops). It is also possible to define certain hops as lazy, meaning the specific decision is taken only when traffic that requires it traverses the network. (For example, a video optimizer may not be selected, and the rules to forward traffic to it will not be applied, until the subscriber consumes online video.)
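The lazy-hop behavior described above can be sketched as follows: an eager hop gets its rules at registration, while a lazy hop installs rules only on the first packet of the traffic class that needs it. This is an illustrative model, not ContexNet's rule engine.

```python
class ChainHop:
    def __init__(self, name, lazy_trigger=None):
        self.name = name
        self.lazy_trigger = lazy_trigger        # traffic class that activates a lazy hop
        self.installed = lazy_trigger is None   # eager hops get rules up front

def forward(chain, traffic_class):
    """Install lazy hops on first matching traffic, then return the path taken."""
    path = []
    for hop in chain:
        if not hop.installed and hop.lazy_trigger == traffic_class:
            hop.installed = True  # decision taken only when needed
        if hop.installed:
            path.append(hop.name)
    return path

# Hypothetical per-subscriber chain: firewall is eager, video optimizer is lazy.
chain = [ChainHop("firewall"), ChainHop("video-optimizer", lazy_trigger="video")]
```

Web traffic would traverse only the firewall; the first video flow triggers the optimizer's rules, which then persist for subsequent traffic.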
Is it possible to scale a function to telco-grade by having lots of instances? What are the advantages of this approach?
ConteXtream: Yes, it is absolutely possible to scale a function to telco grade, because the controller uses the mapping layer to control as many instances as required. Every rack may have a controller, and each controller accesses the global knowledge while making local decisions, so consistency across the network is maintained.
Do you guys partner with third parties for VNFs, or do you provide your own VNFs?
ConteXtream: Yes, we partner with third parties for best-of-breed VNFs, and because of the standard interfaces we can also work with any VNF a service provider wants to use. We showed some third-party VNFs in the demonstration, and ETSI PoC No. 15 includes other third-party VNFs as well. It is also possible to use open source components for these VNFs and scale them to many instances. The controller provides built-in load balancing and maintains affinity in the chaining process on a per-flow basis.
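The answer above doesn't spell out the load-balancing algorithm, but a common way to keep per-flow affinity across many scaled-out instances is to hash the flow key, so the same flow always lands on the same instance. A minimal sketch, with hypothetical instance names:

```python
import hashlib

# Hypothetical scaled-out instances of one VNF (e.g. an open source DPI).
INSTANCES = ["dpi-vm-0", "dpi-vm-1", "dpi-vm-2"]

def pick_instance(flow_key: str) -> str:
    """Deterministically map a flow (e.g. its 5-tuple) to one instance,
    preserving per-flow affinity across the chain."""
    digest = hashlib.sha256(flow_key.encode()).digest()
    return INSTANCES[int.from_bytes(digest[:4], "big") % len(INSTANCES)]

flow = "10.0.0.7:40000->93.184.216.34:443/tcp"
```

Because the choice is a pure function of the flow key, every hop that needs to steer this flow picks the same instance without any shared per-flow state.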
Is your architecture dependent on one orchestration and one hypervisor?
ConteXtream: No, our architecture is not dependent on one orchestrator or one hypervisor. The architecture is intentionally divided into three distinct layers. The topmost mapping layer is completely agnostic of any underlying implementation.
The control layer may include support for specific hypervisors, making it possible to take advantage of hypervisor-specific features. Since all control nodes are federated by the mapping layer, different hypervisors can interoperate, with the hypervisor-specific behavior kept local to the relevant node.
The forwarding layer can be implemented in an agnostic way by running as a VM; it can be tightly coupled with the virtualization infrastructure and run as the vSwitch on the hypervisor; or it can leverage physical programmable resources (either dedicated compute servers or programmable switches deployed at the top of rack). A combination of all three options is also possible. As with the control layer, different nodes may have different deployment models, with the local control plane managing the resources within its domain. We also have a broker layer that allows ContexNet to support multivendor environments.
How do you monitor fault and performance on this architecture?
ConteXtream: The data model stored in the mapping service includes health and performance metrics of the different functions and other resources connected to the network. It is possible to update those metrics through standard APIs exposed by the mapping service. The source of such an update may be the function itself (the VM or a VNF manager on behalf of all the VMs it is managing).
On top of this, health monitors follow the same network path that real subscriber traffic would traverse, performing application-level operations to monitor true service availability. These monitors also update the mapping system with the true availability of the respective resources, making the health metrics more accurate and robust.
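The two update paths described above (self-reporting by the function or its VNF manager, and active monitors probing the real data path) can be sketched against a simplified metrics store. The API shape here is an assumption for illustration, not ContexNet's actual interface.

```python
import time

class MappingServiceMetrics:
    """Simplified health/performance portion of the mapping service data model."""
    def __init__(self):
        self.metrics = {}  # resource id -> latest health record

    def update_health(self, resource_id, healthy, latency_ms):
        # Callable by the VNF itself, its VNF manager, or an external monitor.
        self.metrics[resource_id] = {
            "healthy": healthy,
            "latency_ms": latency_ms,
            "updated_at": time.time(),
        }

    def usable(self, resource_id):
        """A chaining decision would skip resources reported unhealthy or unknown."""
        rec = self.metrics.get(resource_id)
        return bool(rec and rec["healthy"])

nva = MappingServiceMetrics()
# The function self-reports healthy...
nva.update_health("video-opt-vm-2", healthy=True, latency_ms=3.2)
# ...but an active monitor probing the real subscriber path can overrule it:
nva.update_health("video-opt-vm-2", healthy=False, latency_ms=250.0)
```

Because chaining decisions consult the same mapping system, an unhealthy instance drops out of new selections as soon as any source reports it down.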
When troubleshooting problems, how do you diagnose whether the problem is in the overlay or the underlay network?
ConteXtream: There is a clear separation between underlay and overlay. A problem in the underlay will be detected at the underlay level; for example, OAM messages between two forwarding elements will detect an underlay failure. Such a failure may (or, with multipath underlay resiliency, may not) affect the overlay, where it will in turn be detected as an overlay problem. Correlating such events is quite simple, since there are only two layers (underlay and overlay) with a clear mapping between them.
Why is there no PCRF shown in the demo?
ConteXtream: In the demo, the PGW is simulated and generates AAA RADIUS messages directly. In an actual deployment, this integration would be through a PCRF (over either the Sd or the Gx interface), where subscriber identification as well as specific policy information can be acquired.
So in this demo, the subscriber registers through a portal and reports what he is doing. But what about finding the data subscription via the PCRF or AAA, as in your first slides?
ConteXtream: Refer to the previous answer. In the demo, the portal is only updating the PGW with the subscriber policy information. The PGW will then generate a Radius Accounting Start message. The portal is not integrated with the mapping system.
It is important to note that the specific element (RADIUS server, PCRF, gateway) with which the integration is performed is not important. The key point is that there is an integration point that allows subscriber and policy information to be populated into the mapping system. Moreover, many such AAA brokers may exist, all connected to the mapping system concurrently, allowing subscribers and policies to be imported from different sources (for example, different access networks or different sites) using different protocols and integration methods. As long as the information becomes available through the mapping system, it can be used throughout the network.
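The broker pattern described above can be sketched as several adapters normalizing records from different protocols into one mapping system. Field names and record shapes here are illustrative assumptions, not actual RADIUS or Gx message handling.

```python
class MappingSystem:
    def __init__(self):
        self.subscribers = {}  # subscriber id -> normalized record

    def publish(self, subscriber_id, record):
        self.subscribers[subscriber_id] = record

def radius_broker(ms, accounting_start):
    """Translate an already-parsed RADIUS Accounting-Start into a mapping entry."""
    ms.publish(accounting_start["User-Name"], {
        "ip": accounting_start["Framed-IP-Address"],
        "source": "radius",
    })

def pcrf_broker(ms, gx_session):
    """Translate already-parsed Gx session data into a mapping entry."""
    ms.publish(gx_session["subscriber_id"], {
        "ip": gx_session["ip"],
        "chain_profile": gx_session["profile"],
        "source": "gx",
    })

# Two brokers, two sources, one mapping system:
ms = MappingSystem()
radius_broker(ms, {"User-Name": "sub-1001", "Framed-IP-Address": "10.0.0.7"})
pcrf_broker(ms, {"subscriber_id": "sub-2002", "ip": "10.0.1.4",
                 "profile": "parental-control"})
```

Once normalized into the mapping system, both subscribers look identical to the rest of the fabric regardless of which protocol imported them.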
What type of role does the PCRF have in determining the service chain?
ConteXtream: This, too, depends on the specific integration and deployment requirements. Any policy may be defined by the PCRF (using standard or vendor-specific Attribute Value Pairs (AVP)) as long as the relevant broker can register the information in the mapping system.
For example, a chain profile name may be defined by the PCRF, which results in a specific service chain and, in turn, the selection of the specific instances the traffic should traverse. Additional information that pinpoints the policies to apply may also be added, such as a service level that affects resource selection, geographic location, or group membership (e.g. an enterprise mobile private network). Such attributes can also affect instance assignments, which can be viewed as a form of multitenancy.
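Resolving a PCRF-supplied chain profile name into a concrete service function path might look like the sketch below. The profile names, functions, and tenant handling are hypothetical examples, not ContexNet's policy model.

```python
# Hypothetical chain profiles, keyed by the name carried in a PCRF AVP.
CHAIN_PROFILES = {
    "gold-video": ["firewall", "video-optimizer", "nat"],
    "default": ["firewall", "nat"],
}

def resolve_chain(profile_name, tenant=None):
    """Pick the service chain for a profile; a tenant/group tag could
    further scope which instances are eligible at each hop (multitenancy)."""
    chain = CHAIN_PROFILES.get(profile_name, CHAIN_PROFILES["default"])
    # Here we just annotate each hop with the tenant; a real system would
    # use it to restrict instance selection.
    return [(fn, tenant) for fn in chain]
```

An unknown profile falls back to the default chain, so traffic is never left without a forwarding decision.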