SDxCentral CEO Matt Palmer speaks with RunWhen's Kyle Forster about a better way for companies to support gigantic code bases.
What’s Next is a biweekly conversation between SDxCentral CEO Matt Palmer and a senior-level executive from the technology industry. In each video, Matt has an informal but in-depth chat with a fellow thought leader to uncover what the future holds for the enterprise IT and telecom markets. The hook: each guest is a long-term acquaintance of Matt’s, so expect a lively conversation.
This time out, Palmer spoke with Kyle Forster, founder and CEO of RunWhen. Forster has worked for some of the biggest names in the tech world, including Microsoft, Cisco and Google. He was also the founder of Big Switch Networks, a pioneer in software-defined networking (SDN) that was acquired by Arista Networks in 2020. In 2021 he founded RunWhen, an expert SRE/DevOps community that contributes operational commands and artificial intelligence (AI) digital assistants to help app developers and new platform engineers figure out which commands to run, and when.
Editor’s note: The following is a summary of what Palmer and Forster discussed in their conversation, edited for length. To hear the full conversation, be sure to watch the video.
Matt Palmer: You talked a little bit about the problems enterprises have that you're working on, the problems you saw as a reason to start a company. Let's talk a little bit about the customer and the problem.
Kyle Forster: The thing that I saw over and over again was: does everybody have to be an expert in everything? You have these complex tech stacks, with all the cloud infrastructure, Kubernetes and all the microservices that people generally run on top of them, plus the gigantic and very rapid adoption of open source. I just saw these applications come together, and the development speed was stunning. But as soon as the application moved into production, something could go wrong.
At the time that I left Google in 2021, a medium-sized Kubernetes cluster was already running, on average, 40 major open-source operators. That's a lot of microservices. Usually about 30 to 40% of those would just be straight-up open-source packages that were then turned into internal services. So suddenly, to support that, somebody needs to know multiple gigantic code bases, in addition to the entire code base of the application they're trying to support. In many cases, they're trying to support 30, 40, 80, even hundreds of enterprise apps.
Almost every enterprise suddenly found itself facing a problem at a scale similar to the inside of a Google-style hyperscaler, where you can't have everybody who has to troubleshoot this thing be an expert in all parts of the stack. The stack is just too complicated.
What do you do about that? There are techniques to work your way through it. But these tech stacks are not getting simpler; they're getting more complicated, and this problem is here to stay. As humans, we have a limit to how much we can actually learn and stay on top of over the course of any given year.
Palmer: Are you guys writing all of these scripts or training these AI prompts on your own? Do you have a community that's doing this? How do you start to build out this type of a model?
Forster: We came up with a business model that I'm really proud of. We built enough economics into it that whenever any of our enterprise customers chooses to use a script, we actually pay royalties back to the original author. So it's not open source for the good of open source. It's open source because people get paid for their work.
At this point we have 40 authors in our pipeline who either have our development tools or are waiting for them. And they're all, as you'd expect, expert SREs, expert DevOps engineers, expert platform engineers, some of them original contributors on major open-source packages who feel passionately that, “Hey, this package is actually very easy to troubleshoot, if only you know how, and I know how.” So they're using our platform to get that know-how out into the world.
We have these scanning tools that can scan a customer's environment and suggest scripts. And at this point, scanning a small- to medium-sized Kubernetes cluster usually will pull up 1,500 to 2,000 scripts that we think are relevant. It's neat. That means that you get massive amounts of coverage without having to write all that stuff yourself.
That's the biggest benefit that a lot of people will see out of this: “Wow, I get coverage that's better than any library of runbooks we've ever had, and I got it all in a couple of minutes, and I haven't even had to write my first script yet.”
Watch the full video for the rest of the conversation between these old friends and colleagues, who also happen to be tech visionaries.