Kentik’s Avi Freedman on Bringing Network Monitoring Into the Digital, Virtual Age

Kentik Detect is a modern, application-aware network performance monitoring platform. Delivered as a big data SaaS, Kentik Detect ingests trillions of data records and turns them into instant technical and business insights in the form of curated dashboards, ad-hoc analyses, anomaly detection, and peering analytics. With no hardware or software to install, Kentik users go from registration to actionable insight in 15 minutes.

In this interview, Avi Freedman, Kentik’s co-founder and CEO, discusses why Kentik’s approach is unique in the market and shares his vision of where the company goes next.

SDxCentral: Why did you start Kentik?

Avi Freedman: Companies have been “monitoring” their networks for decades, traditionally using data from their network devices to better understand availability, security, and performance problems. But networking has changed dramatically in those decades. We now have the internet, which is of course outside of any one network’s control, plus cloud and data center environments, SD-WAN, and virtualized networks, all while traffic continues to scale exponentially.

In talking with hundreds of operators and enterprises (and because our co-founding team previously ran large networks, including for Akamai and Netflix), we kept hearing that even though networks had grown more complex, operators were stuck with discrete appliances offering limited resolution, or never-complete in-house applications, approaches that seemed stuck in the ‘90s. As a result, they couldn’t gain the visibility they needed to solve the real-world problems they encountered daily.

We entered the market because we wanted to help businesses protect their revenue and security in the face of modern scale requirements and increased complexity. To meet that need, we created a big data SaaS offering that ingests trillions of data points and lets operations teams understand their network environment in real time.

SDxCentral: What is unique about your company?

Kentik offers a modern network traffic analytics engine, unique in solving the scale, granularity, data retention, and query performance challenges of past approaches. It’s offered as a service, which eases deployment and maintenance headaches, but also allows us to aggregate the horsepower to store full-resolution network traffic data for months and make both new and historical data instantly available for granular querying.

Many classic analytics tools look only at data from network elements, i.e., routers and switches; we recognize that the world has changed. We add in data center hypervisor data, SD-WAN traffic data, and context for traffic traversing the internet and cloud environments. The end result is a unified view of data across the entire infrastructure, both physical/owned and outsourced/cloud. All of that data is enriched with the context of your business: your specific apps, your revenue models, and your customers. That makes it understandable and actionable. We’re also very flexible; most of our customers are housed in the public cloud, but we also have single-tenant solutions and on-premises deployments.

SDxCentral: Why did you decide to base your company’s technology on NetFlow?

We believe the richness of real-time information about actual traffic patterns, captured at scale, tells you more about your network than other approaches. The truth is in the traffic, so to speak. NetFlow and other flow variants have two key attributes that make them a perfect data source. First, they’re pervasive: flow export is now available in nearly every router and switch deployed in modern networks, so you can see the entire infrastructure using the gear that’s already deployed. Second, they’re a rich data source, with lots of detail about the “who, what, when, where, and how much” of IP network traffic. NetFlow isn’t just about packets anymore, either; with modern infrastructure and instrumentation software, it can also provide full insight into applications, transactions, and performance.
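To make that “who, what, when, where, and how much” concrete, here is a minimal sketch, in Python, of the core fields a NetFlow-style record carries. The field names are illustrative rather than any exact wire format; NetFlow v5, v9, and IPFIX all encode these differently.

    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        # One exported flow summarizes a single network conversation.
        src_addr: str      # who is talking (source IP)
        dst_addr: str      # where the traffic is going (destination IP)
        src_port: int      # which service/application ports are involved
        dst_port: int
        protocol: int      # IP protocol number (6 = TCP, 17 = UDP)
        first_seen: float  # when the conversation started (epoch seconds)
        last_seen: float   # when it was last observed
        packets: int       # how much: packet count
        octets: int        # how much: byte count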

NetFlow has sometimes been overlooked, either because of the difficulties in collecting and processing it at scale, or because the level of detail it provided was not sufficient for some use cases. We’ve been able to solve both problems, using modern distributed computing architectures and by extending the technology to capture much more application and network performance data across environments. We combine flow data with Layer 7 information, sFlow, IPFIX, Border Gateway Protocol (BGP), geolocation, and business context to create and store augmented flow, which becomes enormously interesting and valuable. Synthetic flows can be a useful complement to network monitoring and management, but actual traffic is the optimal place to start, given that anything you see is automatically a validated, performance-affecting issue.
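As a rough illustration of what “augmented flow” means, the sketch below joins a raw flow record with BGP, geolocation, and business lookups. The tables and field names are hypothetical stand-ins; a production pipeline performs this enrichment at ingest, at scale.

    # Hypothetical lookup tables; in practice these come from live BGP feeds,
    # geolocation databases, and the customer's own business systems.
    BGP_TABLE = {"203.0.113.10": 64501}              # destination IP -> origin ASN
    GEO_TABLE = {"203.0.113.10": "DE"}               # destination IP -> country
    CUSTOMER_TABLE = {"198.51.100.7": "acme-tier1"}  # source IP -> customer tag

    def augment(flow: dict) -> dict:
        # Return the raw flow plus routing, location, and business context.
        return {
            **flow,
            "dst_asn": BGP_TABLE.get(flow["dst_addr"]),
            "dst_country": GEO_TABLE.get(flow["dst_addr"]),
            "customer": CUSTOMER_TABLE.get(flow["src_addr"]),
        }

    raw = {"src_addr": "198.51.100.7", "dst_addr": "203.0.113.10",
           "dst_port": 443, "octets": 120000}
    print(augment(raw))  # the stored record now answers business questions too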

SDxCentral: Who are your customers and why did they select Kentik?

We have two large groups of customers: enterprises focused on delivering revenue-generating applications they want to protect and enhance, and service providers.

App-focused enterprises like Box, Dailymotion, Pandora, and Yelp (all customers) want to defend their revenue by understanding performance problems and by optimizing application delivery and operations. Most come from a world of single-purpose appliances or narrow-view, low-density open source platforms. They turn to us because they want a wider view, and they want performance data not just at the IP address level but also at the application level and beyond. We provide them the granularity they need to make real-time operational decisions, and the big-picture insight to make core business decisions.

On the service provider side, we have providers of data center services and hosting, wholesale providers, and communications service providers. For these businesses, the network is the business, and understanding and protecting its availability, while optimizing the cost of running it, is a key executive mission. We provide a core component for that mission, even at scale across very large infrastructures. Our customers here include DreamHost, GTT, KDDI, Neustar, OpenDNS (now a part of Cisco), and phoenixNAP, among others.

SDxCentral: What platforms are compatible with Kentik and why is this important?

When customers install Kentik, they are almost always integrating it with other tools: databases, OSS and BSS systems, provisioning infrastructure, and configuration management. We can take data from a broad range of sources beyond network devices, including web servers, agents deployed within the cloud, and sensor infrastructure. All of these can send Kentik rich information about applications, users, and customers, so we can in turn present network performance and monitoring data within the company’s specific business context.

Integrations also help us add value for the customer. For instance, we have a field-ready DDoS protection integration between Kentik Detect and the A10 Networks Thunder TPS mitigation solution. It means we can support some exciting functionality, like triggering Thunder TPS’ dynamic mitigation, which escalates suspect traffic through progressively tougher countermeasures to minimize the impact on legitimate traffic. We also integrate with Radware’s Attack Mitigation System to enable network data analysis and scalable mitigation for effective DDoS attack protection, on-premises or in the cloud. Together, through these technology partnerships, we provide a far more accurate and powerful defensive solution than legacy tools and technologies developed for the internet of a prior generation.
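The escalation idea is easy to picture in code. This is a generic sketch of progressive countermeasures, not A10’s or Radware’s actual API; the countermeasure names and the per-interval verdicts are invented for illustration.

    # Mild-to-severe countermeasures; names are illustrative only.
    LADDER = ["rate_limit", "syn_challenge", "blackhole"]

    def next_countermeasure(level: int, still_suspect: bool) -> int:
        # Step up the ladder only while traffic keeps failing checks, so
        # legitimate traffic is never challenged harder than necessary.
        if still_suspect and level < len(LADDER) - 1:
            return level + 1
        return level

    level = 0
    for verdict in [True, True, False]:   # per-interval "is it still suspect?"
        level = next_countermeasure(level, verdict)
        print(LADDER[level])              # syn_challenge, blackhole, blackhole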

SDxCentral: What’s next for the company?

Kentik’s major focus for 2018 is to continue to curate and contextualize data, so that non-networking personnel can get more value from our core capabilities. Creating visualizations and integrations that carry the data into other analytical workflows will be a primary mission. More specifically, this will be aimed at making insights actionable for DevOps and app development teams without requiring them to know the underlying infrastructure layers or the network itself.

The other focus for early 2018 is to become even more tightly integrated into the marketplaces of leading public cloud providers.

There are big changes on the horizon at a broader level. In the enterprise world, we’ve been impressed with how integrated the network, application, server, and cloud teams have become, and how much they want to share (and are sharing) information. They’re out to save each other time and make applications that run better and faster. To that end, 2017 saw OpenStack adoption come into its own, and now we’re seeing containers, microservices, and Docker in nearly all greenfield app development. Monitoring all of this, however, has been dragged down by legacy tools, something we can solve. This will be an important market in 2018.

In the service provider world, companies want to move away from “bit jail” and into higher-margin services. Their revenue is still largely dependent on how much infrastructure they have. Moving away from that requires product and technology teams to work together and to create services that use big data methodologies and network function virtualization (NFV). Our security and analytics offering covers the physical plant and the data center in an integrated way, including NFV components.

Networks have changed so radically, and they’re right at the heart of everything, yet too many organizations are still using monitoring tools from 20 years ago. Most of them are ready to move on. We offer something completely different, and we’re happy to bring the market with us.