Thanks to all who joined us for the Pulse Secure 2018 Next-Gen Data Center Networking Report Webinar: A Trust Model for Multi-Cloud Networks and Applications Beyond Zero Trust. During the webinar, Pulse Secure Chief Software Architect Rob Koeten introduced a new way to think about security and access, which extends beyond the network to people, devices, and applications. After the webinar, we took questions from the audience but unfortunately ran out of time before we could get to all of the questions. Read the full “A Trust Model for Multi-Cloud Networks and Applications Beyond Zero Trust” Q&A below.
How would you define Software-Defined Security?
Security is a broad spectrum, so when we talk about software-defined security, maybe I can also talk about how we define software-defined secure access within that. In reality, it comes down to how we orchestrate the trust relationship between the requestor and the provider of the service and its information. Part of that is creating a software model and configuration around the (micro-)segments and the resources that exist, as well as the entity that is trying to access these resources.
So we are working toward a model in which we can normalize the context of the requester — i.e., the user or the identity behind the device, the actual application on the client side that is requesting access, and its security posture, as well as the device location and time of day.
And we want to provide that as part of a normalized model that can be associated with policies that represent the software definition of secure access, and then bind that to various resources in a structured fashion. We are working toward a trust model in the form of the software definition, so that we can reuse pieces of that downstream on the server side, in order to enforce those policies and services.
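To make the idea of a normalized requester context bound to policy more concrete, here is a minimal sketch in Python. All names here (`RequestContext`, `allow_access`, the posture strings) are hypothetical illustrations, not Pulse Secure APIs; the point is that the requester's identity, client application, device posture, location, and time of day are normalized into one structure that policies can evaluate regardless of where the request enters.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class RequestContext:
    """Normalized view of the requester, independent of the entry point."""
    user_id: str
    client_app: str       # the application on the client side making the request
    device_posture: str   # e.g. "compliant", "jailbroken", "unknown"
    location: str         # e.g. "corp-lan", "home", "unknown"
    timestamp: datetime

def allow_access(ctx: RequestContext, resource: str) -> bool:
    """One policy bound to a resource: require a compliant device during business hours."""
    in_hours = time(8, 0) <= ctx.timestamp.time() <= time(18, 0)
    return ctx.device_posture == "compliant" and in_hours

ctx = RequestContext("alice", "crm-client", "compliant", "home",
                     datetime(2018, 6, 1, 10, 30))
print(allow_access(ctx, "crm"))  # True
```

Because the context is normalized, the same policy can be reused downstream on the server side for enforcement, as described above.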
Why is it that zero-trust security models are not suitable for multi-cloud networks? Can you give a bit more detail on that?
I would not necessarily say that they are not suitable. I think it is a reflection of the pendulum swinging from seemingly unlimited access to everything based on access to the corporate LAN, to a zero-trust state with no access granted without explicit access grants to each resource.
The key here is that zero trust literally is not enough, so we have to go beyond a zero-trust model. Instead, in our model we establish different elements and phases of trust. As an example, we identify the user, establish the user's reputation, and track that over time. Then, we associate that user trust with a trust assessment of their device, either at the time of a resource request or over a longer period using behavioral analytics. That means that even before we determine or validate trust at the application or resource side, we must already have established some level of trust around the requestor.
The reason I use this as my example is that we cannot adopt a pure zero-trust model for every resource request, since doing a full trust assessment each time would be prohibitive in both time and throughput. At some point, you are going to have to establish trust that lasts throughout a series of requests. In a lot of ways, it's no different from dealing with personal relationships. If you have not talked to a close friend that you trust in a while, you are going to have to reconnect and re-establish that trust over a period of time. The same model applies here: you can't approach a friend with "I don't trust you" at every step; you're going to have to establish some trust to maintain and build on the relationship.
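The "trust that lasts throughout a series of requests" can be sketched as a session whose trust expires and must be re-established, much like the friendship analogy. This is an illustrative toy, not Pulse Secure's implementation; the TTL value and class name are assumptions.

```python
class TrustSession:
    """Trust established once (expensively), reused across requests, re-verified when stale."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.established_at = None

    def establish(self, now: float) -> None:
        # A full trust assessment (user, device posture, behavior) would happen here.
        self.established_at = now

    def is_trusted(self, now: float) -> bool:
        # Trust decays: past the TTL, we must "reconnect", like a lapsed friendship.
        return self.established_at is not None and now - self.established_at < self.ttl

session = TrustSession(ttl_seconds=300)
session.establish(now=0)
print(session.is_trusted(now=120))  # True: within the trust window
print(session.is_trusted(now=900))  # False: trust must be re-established
```

The design choice is the trade-off described above: a pure zero-trust check on every request is too expensive, so trust is amortized over a bounded window.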
How does this journey break down into smaller parts or phases? And following on from that, what organizational impact and timeframes should be expected?
Great question, and certainly not a simple question by itself. In terms of an actual breakdown, I think there are three or four parts that we need to take into consideration. And it certainly cannot be the case that you have to complete all parts in order to get value out of adopting a secure access model.
First and foremost is really understanding and establishing the trust relationships on the client side, which span several layers (the user or IoT endpoint, the client application, the client's device, and the client's originating network). I think in a lot of cases that effort is underestimated, so we are making strides there in parallel with evolving the overall model.
Second, and in parallel, there are the various trust relationships on the other side (the service or provider side). I think we, as industry solution providers, do a much better job on the trust model between the information that exists in applications (within their micro-segments) and the segmentation of the software-defined network that is applied.
But I think the third key component is to establish a model that binds that trust from one side to the other (client to service-provider side) in a granular fashion. And with that trust model, we need to create a unified policy model that really establishes the binding.
In terms of breaking that down, you can break it down across the multi-cloud environment that we aim to support. So whether you start with your legacy data center or with cloud infrastructure — public or private — there are different ways to establish the multi-layer binding. I think first and foremost, the establishment of a common trust and policy model is critical. And that is why we wanted to share our perspective on SDN [software-defined networking], SDP [software-defined perimeter], and SDA [software-defined access]: the building blocks are there, so it's about which pieces of those building blocks we tie together.
And last but not least, what we are trying to do is to enhance the analytics and insights around that. And again, that’s something that can be done in parallel over time. Where we may initially just have analysis on the server side, we may also have separate analysis on the client side showing what the user and their devices are doing. Then in a second or third phase, you can start bringing this together to correlate the various client behaviors versus the various application requests and events.
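The correlation step described above — bringing separate client-side and server-side analyses together — can be sketched as joining the two event streams by user within a time window. The event names and window are hypothetical; the shape of the idea is what matters.

```python
from collections import defaultdict

# Hypothetical client-side telemetry (what the user and device are doing)
client_events = [
    {"user": "alice", "t": 100, "event": "device-posture-change"},
    {"user": "bob",   "t": 105, "event": "new-location"},
]

# Hypothetical server-side application events (resource requests)
server_events = [
    {"user": "alice", "t": 103, "event": "payroll-download"},
    {"user": "alice", "t": 400, "event": "login"},
]

def correlate(client, server, window=10):
    """Pair each server event with client events for the same user within `window` seconds."""
    by_user = defaultdict(list)
    for e in client:
        by_user[e["user"]].append(e)
    pairs = []
    for s in server:
        for c in by_user[s["user"]]:
            if abs(s["t"] - c["t"]) <= window:
                pairs.append((c["event"], s["event"]))
    return pairs

print(correlate(client_events, server_events))
# [('device-posture-change', 'payroll-download')]
```

A posture change immediately followed by a sensitive download is exactly the kind of correlated signal that separate client-only or server-only analysis would miss.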
So that’s how I see it being broken down from our perspective, and we have seen how other companies have tried to address that as well.
In terms of timing, as this builds on the existing world, there are core elements that exist today, which address granular access in different segments. I think the first step we need to take is the evolution of our unified policy approach to be able to share that unified security and trust model with third-party vendors. I would expect to see us evolve that to the next level over the next six to 12 months. Meanwhile, in parallel, we can certainly address aspects of the trust model as we go forward.
What role would an application gateway or an ADC (Application Delivery Controller) play in this trust model?
Also a great question. If we go back to the trust model picture, we can see in which layer and on which side the ADC plays: the application layer itself. By virtue of the ADC being in the data path, it provides unique opportunities to not only control the delivery of the application, but also to control secure access at a fairly granular level — on Layer 2/3, Layer 4, or Layer 7. This could potentially be integrated with other application traffic or information inspection capabilities. And so this gives us another component — an access node in one of the layers on the provider side — that allows us to establish and validate trust, provide telemetry for verification, and analyze the trust relationships in application-layer traffic. By all means, check out our virtual ADC solution for that.
That is only one part. The other aspect is the ability for us to deploy virtual ADCs [vADC], not only in your data center, but also in virtual or cloud environments — certainly addressing the needs of the Mode 2 agile application delivery. And the other key aspect of a vADC is that it is not a piece of infrastructure that gets layered on top or after the fact; the vADC is another critical component of a next-gen application that a DevOps team deploys and delivers with the application, in either the data center or cloud infrastructure.
And so, as part of the agile delivery of applications, the vADC — and therefore the underlying trust model — can be incorporated into the application delivery itself. So if you already have an orchestration and policy management platform in place, it’s just a matter of hooking that ADC to the orchestration platform, and you get secure access enforcement from the get-go when the application utilizes the virtual ADC.
How would you support micro-segmentation today?
The Pulse vADC specifically has the ability — through various routing, load balancing, and traffic scripting capabilities — to let you define, through scripts and configuration settings, the actual micro-segments and the routing of specific traffic, whether that is based on licensing or on privileged access itself. So the vADC, by its very nature, can grant or block access to different resources on its own. You can segment your applications and resources in different forms by the way it selectively routes traffic.
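The idea of micro-segmentation through selective routing can be illustrated with a short sketch. This is not Pulse vADC TrafficScript — just a generic Python stand-in showing the principle: each path maps to its own pool (segment), and a privileged segment is simply unreachable for unprivileged requesters.

```python
from typing import Optional

def route(path: str, privileged: bool) -> Optional[str]:
    """Selective routing as micro-segmentation: the router itself grants or
    blocks access to each segment. Returning None means the request is dropped."""
    if path.startswith("/admin/"):
        return "admin-pool" if privileged else None  # admin segment: privileged only
    if path.startswith("/api/"):
        return "api-pool"
    return "web-pool"

print(route("/api/users", privileged=False))   # api-pool
print(route("/admin/keys", privileged=False))  # None (blocked)
print(route("/admin/keys", privileged=True))   # admin-pool
```

In a real deployment, the equivalent logic would live in the ADC's traffic scripts and configuration rather than in application code.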
Which SecOps tools can you integrate with?
We support a variety of SecOps tools and solutions, including the key on-premises and cloud-based SIEM solutions that our customers may already utilize. In addition, we have basic support for various incident response platforms, either directly or through their SIEM integrations. We are working to expand the support for mutual action automation and analytics-based incident identification.
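As an illustration of what SIEM integration typically involves, here is a minimal formatter for CEF (Common Event Format), a widely supported ingestion format for SIEM platforms. The field values are hypothetical examples, not Pulse Secure's actual event schema.

```python
def to_cef(vendor, product, version, event_id, name, severity, **ext):
    """Format an access event as a CEF line:
    CEF:0|Vendor|Product|Version|EventClassID|Name|Severity|key=value ..."""
    extension = " ".join(f"{k}={v}" for k, v in ext.items())
    return f"CEF:0|{vendor}|{product}|{version}|{event_id}|{name}|{severity}|{extension}"

line = to_cef("Example", "SecureAccess", "1.0", "100", "access-denied", 5,
              suser="alice", request="/payroll")
print(line)
# CEF:0|Example|SecureAccess|1.0|100|access-denied|5|suser=alice request=/payroll
```

Emitting events in a common format like this (usually over syslog) is what lets both on-premises and cloud-based SIEMs consume access telemetry without bespoke connectors.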