Doug Marschke is an engineering graduate from the University of Michigan and founder of SDN Essentials. He has also authored several books, including “SDN: Anatomy of OpenFlow,” “JUNOS Enterprise Routing,” and “JUNOS Enterprise Switching.” Doug currently spends his time working with both service providers and enterprises to optimize their IP networks for better performance, cost, and reliability. He has been specializing in network automation, cloud, SDN, and NFV technologies for the past few years, and helped roll out the ONF’s certification exams.
Konstantin Babenko holds a Ph.D. from the Ukrainian Cybernetics Institute of the National Academy of Sciences. His academic work focuses on applying Machine Learning and Artificial Intelligence methods to analyze traditional telecommunication data such as network flows and packets. Over the course of his 20+ year career, Konstantin has been responsible for architecting, designing, and implementing major OSS/BSS solutions for Tier 1/Tier 2 CSPs and network management solutions for enterprises.
SDxCentral: We’re about 6 years into some sort of SDN Revolution. How do you see things going and where is there the most progress?
Doug: Time flies when you are having fun! While we certainly have not seen the explosion of SDN technologies that many expected, we have seen it affect customers and vendors. Customers are changing the way they architect their networks, and vendors are changing the way they design products. It has been interesting to see how different solutions are presented for solving the same problem, and it sparks some genuinely interesting intellectual debates. The classic open SDN controller has not really taken off as expected, and we are now finding that there have to be more specific architectures for different verticals. That is the next thing for the industry to tackle and try to agree on.
Where are the biggest challenges?
Doug: The biggest challenge is that we are naturally opposed to change, and everyone is a bit nervous about crossing the chasm from classic network design to design based on SDN and NFV. It was just a few years ago that we moved from the classic three-tier data center to two tiers, and now often to a flat architecture. These things take time, and the more customers that make the leap across the chasm, the more innovation we will see. So how do we cross this chasm? It starts with education, both technical and business, which is something that SDN Essentials has been doing for the last few years.
NFV aims for an open platform. Do you have ideas how multi-vendor NFV orchestration can happen?
Doug: Well, we did it! We have a multi-vendor orchestration platform called NFVgrid, which we will be demoing publicly for the first time at the OpenDaylight Summit in Washington in a few weeks. This represents 2+ years of development in stealth mode. It was a difficult task, as we had to work intimately with each vendor’s solution, which resulted in upstream changes to their own products. At the end of the day, this allowed us to make our own product better as well as improve theirs. The key for us was to create flexible architectures that could onboard VNFs efficiently. This required us to build a pretty hefty VNF validation test suite within our own product, which we have shared with our customers through our services organization. It was a win-win all around.
Tell us about the value of analytics and how you see it being implemented.
Doug: Analytics has been a key part of networking since its inception, and it has always been a difficult problem to solve. However, the problem was never the theory of analytics but the physical hardware limitations we had to engineer around. With network virtualization, the underlying hardware has changed, which allows for some interesting packet capture and flow-based analytics. It doesn’t stop there: the key is not just capturing the data, but what you do with it. We implement automated policies based on machine learning and observed data patterns. In other words, we make it easy to create business and security policies based on the traffic we are gathering. This opens up a whole new world for both security and traffic engineering. Our chief scientist, Konstantin Babenko, Ph.D., has done an amazing job with some of the algorithms to get this done. It wasn’t easy, but I think we really do have something special here.
Konstantin: Non-deterministic logic is one of the key elements and differentiating factors in our approach to network analytics. Data collection at the packet level is another. Even a few years ago such granular analysis of networking data was not possible, but now, with the help of SDN and NFV, we can capture a very significant percentage of network packets and process them without any impact on network performance. Big Data architectures provide the means to handle that volume quickly and efficiently. We use this kind of architecture in NFVgrid, and with it we were able to achieve very impressive results in networking data-collection efficiency.
The next step is to analyze the collected data. Such analysis needs to be very intelligent and provide new levels of data aggregation and correlation. In our case, we focus on patterns. By analyzing historical data, we build models that represent the typical behavior of a particular networking object. Once such models are created and trained using Machine Learning methods, we can determine which objects are “behaving” outside of their typical patterns. When you see an object acting outside of its usual profile, you need to start looking at it immediately; it typically indicates malicious activity or even compromised network security.
In NFVgrid we implemented such self-learning algorithms for granular network data analysis. A significant advantage of the ML-based approach is that you don’t have to spend time and resources on manual configuration; the system simply learns by itself and notifies you when it detects something out of the ordinary, for example resource degradation or a security threat.
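The baseline-and-deviation idea described above can be sketched in a few lines. This is a hypothetical illustration, not NFVgrid’s actual algorithm: it learns a simple per-object statistical profile (mean and standard deviation of a traffic metric) and flags samples whose z-score exceeds a threshold. The `TrafficBaseline` name and the byte-count metric are invented for the example.

```python
from collections import defaultdict
from statistics import mean, stdev

class TrafficBaseline:
    """Hypothetical sketch: learn a per-object traffic profile, flag deviations."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold        # z-score cutoff for "out of profile"
        self.history = defaultdict(list)  # object_id -> observed byte counts

    def observe(self, object_id, byte_count):
        """Record a normal-traffic sample for an object (the learning phase)."""
        self.history[object_id].append(byte_count)

    def is_anomalous(self, object_id, byte_count):
        """True if the sample deviates from the learned profile."""
        samples = self.history[object_id]
        if len(samples) < 2:
            return False                  # not enough history to judge yet
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            return byte_count != mu       # profile is constant; any change is odd
        return abs(byte_count - mu) / sigma > self.threshold
```

A real system would use richer features (flow counts, destinations, timing) and proper ML models rather than a single z-score, but the shape is the same: train on historical behavior, then alert on departures from it.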
What kinds of tools will the industry need to validate technology and interoperability?
Doug: This has been another really cool offshoot of our product, NFVgrid. Making a product that is truly interoperable required quite a bit of testing and validation. The first few onboardings were very manual and took months, but as we progressed and learned more about the common elements, we were able to create a set of tools to onboard a VNF. These tools vary a bit based on the type of validation: function testing, interoperability testing, and scale testing. To be honest, there is still a bit of manual intervention when a test fails, but at least we don’t have to start at ground zero. This is why we decided to wrap our VNF validation into our professional services group: you need the experience and expertise to troubleshoot when validation tests fail. There is no magic silver bullet for that yet, but give us a bit of time, and we will do our best to create one!
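The three validation categories mentioned above could be organized in a harness roughly like the following. This is a hypothetical sketch, not SDN Essentials’ actual tooling; the class names and example check names are invented for illustration. It records results by category and surfaces failures for the manual triage Doug describes.

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    FUNCTION = "function"         # does the VNF do what it claims?
    INTEROP = "interoperability"  # does it work with the orchestrator and peers?
    SCALE = "scale"               # does it hold up under load and scale-out?

@dataclass
class Result:
    name: str
    category: Category
    passed: bool
    detail: str = ""

@dataclass
class VnfValidationSuite:
    """Hypothetical sketch: collect onboarding checks grouped by category."""
    results: list = field(default_factory=list)

    def record(self, name, category, passed, detail=""):
        self.results.append(Result(name, category, passed, detail))

    def failures(self, category=None):
        """Failed checks (optionally for one category) needing manual triage."""
        return [r for r in self.results
                if not r.passed and (category is None or r.category == category)]
```

Grouping results this way lets an engineer jump straight to the failing category instead of starting "at ground zero" on every failed onboarding run.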