Officially announced this week, Richard Napolitano has taken the reins from former Plexxi CEO David Husak. A member of Plexxi’s board for several years, Napolitano worked alongside Husak to launch Plexxi and was recently persuaded to lead the company. In our exclusive interview with the new CEO, Napolitano expands on the future of IT, big data, and Plexxi.
SDxCentral: Let’s start with the obvious, Rich. You have an impressive background, having driven more than $4 billion annually in revenue and led more than 2,000 people at EMC. But how does someone with a strong storage background end up as the CEO of a networking company?
Napolitano: The future of IT isn’t about networking any more than it is about storage. Where we are headed as an industry — and by industry, I mean all infrastructure — is more about the applications than the disaggregated components. How do compute, storage, and networking work in concert to deliver some application experience? We believe these come together as what we and others call the Third Platform. And we believe that this platform is application-centric but oriented around the data that makes next-generation applications powerful.
The nature of this Third Platform is that the most important elements of IT are shifting from infrastructure to applications to data. If that data is vast and distributed, then perhaps the most critical element is the network that connects it all together. I felt like Plexxi had arrived at the correct thesis strategically, had developed the necessary technology building blocks, and had accumulated a critical mass of top talent to execute against the vision. As for the timing, if you listen to customers, it’s obvious that the industry is at peak frustration. Never before has there been so much focus on topics like vendor margins and operations. The noise is because people need a different way of doing things. There is a huge opportunity for companies capable of delivering better in a world where good enough just isn’t.
You are asserting that the rise of data is important to applications and the network. Can you give some concrete examples?
One trend putting this dynamic on display is the rise of big data ecosystems. If you look at companies like Cloudera, they are providing platforms that allow data that is diverse and distributed to be accessed as a single pool of information, called a data lake. The architectural premise is that applications act on this data lake to drive results. The data might be a collection of real-time customer data for retail shops, or it could be sensor information useful in managing industrial infrastructure. Whatever the eventual use case, the two properties that are common are that the data must be unified and available just in time. This means the network that connects data and application resources is critical.
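To make the data-lake premise concrete, here is a minimal, hypothetical sketch (not Plexxi or Cloudera code; the `DataLake` class and its sources are invented for illustration) of the architectural idea: several distributed data sources exposed behind one query interface, so applications act on them as a single pool.

```python
# Hypothetical illustration: distributed data sources queried as one logical
# pool. Real data lakes (e.g. HDFS-backed) operate at far larger scale, but
# the premise is the same: one namespace, many physical locations.

class DataLake:
    def __init__(self):
        self.sources = []          # each source: an iterable of records

    def register(self, source):
        self.sources.append(source)

    def query(self, predicate):
        """Scan every registered source as if it were a single pool."""
        return [rec for src in self.sources for rec in src if predicate(rec)]

# Two "sites" holding retail events, unified behind one query interface.
store_east = [{"store": "east", "sku": "A1", "qty": 3}]
store_west = [{"store": "west", "sku": "A1", "qty": 5}]

lake = DataLake()
lake.register(store_east)
lake.register(store_west)

total = sum(rec["qty"] for rec in lake.query(lambda r: r["sku"] == "A1"))
print(total)  # → 8
```

The point of the sketch is the access pattern, not the storage: because the application never knows (or cares) which site holds which records, every query implicitly depends on the network that joins the sites.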
If the customer objective you describe is about the data and the applications, isn’t the real goal to relegate the network to some supporting role, or to make it disappear entirely?
Absolutely. Networking is far too dramatic in many environments. In some cases, it’s a point of contention because it’s where everything comes together, in which case the drama is how to manage the different requirements, priorities, and interests given an increasingly overburdened network team. In other instances, the network is so fragile that any change whatsoever threatens to turn into a fireworks show with unknown collateral damage. The whole point of what we are doing around this Third Platform is to make the network scalable and responsive, but to do it through integration hooks that allow the network to be abstracted. The network for a new era has to dramatically simplify the act of networking if infrastructure is going to have any hope of keeping pace with the changes inherent in a dynamic, data-oriented world.
That’s a great vision, but how does that map to actual technologies and solutions?
If the data is vast and distributed, the applications themselves will become more distributed. This means the network that serves them must also be horizontally scaled. And if the applications and data are in near-constant flux, then the network must also be far more adaptable and agile than it is today. So what do you need to do that? First, you need to take care of the physical transport pieces — the interconnect, if you will. Even though everything is software-defined these days, until we invent a way to will packets from A to B, you still need a physical interconnect.
Second, you need some way to intelligently control that physical transport. This is the SDN piece. But SDN in this context is less about architectural separation (which is really a how) and more about intelligent path selection based on data and resource locations. Finally, if the vision is an integrated platform, then the whole thing has to be integrated with surrounding infrastructure. This isn’t just interoperability with other networking equipment but rather integration with provisioning, monitoring, capacity planning, and the like. But it doesn’t stop there. We need to be able to extend that integration up to the application ecosystem, integrating with platforms like Cloudera, Hortonworks, and the tools and frameworks that surround them like YARN and HDFS.
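To illustrate what "intelligent path selection" can mean in practice, here is a hedged sketch (this is a generic shortest-path controller, not Plexxi's actual algorithm; the topology and cost values are invented): a controller with a global view routes traffic using link weights that reflect load or data locality, rather than hop count alone.

```python
# Hypothetical sketch: a controller picks a path through the fabric using
# Dijkstra's algorithm over load-weighted links, so traffic can route around
# a congested direct link toward where data actually lives.
import heapq

def best_path(links, src, dst):
    """Dijkstra over a dict {(a, b): cost}; links are bidirectional."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

# Congested direct link (cost 10) vs. a lightly loaded two-hop detour (2 + 3).
links = {("app", "data"): 10, ("app", "spine"): 2, ("spine", "data"): 3}
print(best_path(links, "app", "data"))  # → (5, ['app', 'spine', 'data'])
```

A hop-count-only network would take the congested direct link; a controller that weights links by observed load picks the detour. That gap between topology-driven and data-driven routing is the "what" behind the SDN piece described above.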
A lot of this is very forward-looking. Are you worried at all about building a company around a Third Platform that hasn’t yet emerged?
Navigating a migration from current-generation to next-generation architectures is about solving problems in today’s context while simultaneously preparing for the future. You can’t just create a solution for tomorrow, because from a business perspective, there needs to be a compelling reason to make the leap forward. So the real question is whether our current Second Platform solutions are serving us well. If you look at the leading technology companies — web services companies like Facebook and Google — they are creating their own platforms so they can support their horizontally scaled-out applications. Do you think they want to be in the infrastructure business? Certainly not, but the lack of solutions forces them to be.
But it’s not just web services. In the enterprise space, if you want to roll out a big data deployment, you roll out a new network along with it. Why? Because the current networks aren’t designed with new applications in mind. If you look at service provisioning in a data center, it’s too complicated and takes too long. And in multisite data centers, customers have to add a third network just to interconnect two other networks. The whole thing is already unwieldy, and then imagine what happens when the applications evolve.
The vision you have outlined is certainly broader than we have heard before. Does this vision represent a pivot for Plexxi?
Companies go through different stages. For the past few years, Plexxi has been developing its technology theses, and they appear to be playing out in the market. The data center is definitely going to be photonically interconnected, software-defined, automated, and integrated. With the technology pillars in place, it’s now time to place those capabilities into a meaningful business context. This isn’t a change in strategy so much as a clarification of market fit. Our customers are telling us that they need a way of solving their current pains, but what keeps them up at night is the thought of keeping pace with a much more fluid application infrastructure. With that in mind, we are focused on building solutions for the agile data center, scale-out applications, and distributed cloud in such a way that today’s issues are resolved and tomorrow’s are planned for appropriately.