Virtualization is like a hot pocket—it can offer you an amazing experience, but it can also burn you if not properly prepared. Yes, virtualization offers unparalleled agility, flexibility, and cost savings, but it also presents complexity-driven challenges. The reason is a lack of granular access to application traffic in increasingly distributed virtualized environments; the blind spots this creates can compromise application performance or security.
For instance, the complexity of business data exchanges between on-premises applications running virtually at disparate locations makes it difficult to visualize, identify, and predict network outages, or to analyze mission-critical application performance. Put simply, not knowing the state of the inside could land you in a critical situation.
But there is no stopping the shift toward digitization. The industry as a whole is at an important inflection point as globalization, the Internet of Things (IoT), the cloud, virtualization, and mobile devices force companies to extend their networks past the traditional perimeter. Knowing this, it is time to focus on better visibility now to avoid problems that are sure to arise down the road.
Why Now? Everyone’s Hungry
Budgetary constraints, technical limitations, security concerns, and performance issues are preventing organizations from moving their entire IT infrastructures to the cloud. Most enterprises are, therefore, implementing a hybrid enterprise model, which enables them to retain control of their IT environments, while sending a mix of critical and non-mission-critical workloads to the public cloud.
In this process, external cloud-based applications can be a mix of software-as-a-service (SaaS) applications and customer-developed applications running on external infrastructure-as-a-service (IaaS) platforms. And so, obstructed visibility becomes an issue. According to a recent Gartner research note: “Lack of visibility proliferates due to increasing use of cloud-based apps, encryption, and general network expansion. Moving forward, IoT will add additional visibility challenges.” This prompts the growing need for end-to-end visibility and security in an increasingly complex virtualized world.
Further, as IT decision makers work to implement and manage viable hybrid networks and environments, they operate in a business climate where application and network performance is essential to generating revenue and maintaining customer relationships. Access to critical application data in these virtualized networks and hybrid cloud environments through monitoring tools is key to ensuring the reliability, security, and performance of mission-critical applications. Granular access to application packet data is especially important when an event requires further troubleshooting and fault analysis.
How to Know When It’s Just Right
According to Cisco, about 76 percent of data center traffic is east-west. Copying raw packet data for continuous analysis is therefore impractical in most cases, given the sheer amount of information that would need to be transferred from the site where it is captured to a remote monitoring host. Getting the right information to the right tool is still critical, though, and that requires filtering and grooming the virtual traffic of interest at the source to avoid undesirable network congestion. This can only be done by ensuring pervasive access to critical data across all physical and virtual environments throughout the service delivery path: branch office, virtualized data center, private cloud, telecommunications network core, and even the public cloud, all in a way that avoids tool overload.
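To make "filtering and grooming at the source" concrete, here is a minimal sketch in Python. The `Packet` fields, the monitored-port list, and the helper names are all hypothetical, standing in for whatever classification rules a real virtual tap would apply:

```python
# Illustrative sketch: keep only traffic of interest at the source,
# so far less data has to cross the network to a remote monitoring host.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    payload_len: int

# Hypothetical rule: monitor only web and database traffic.
MONITORED_PORTS = {443, 1433, 8080}

def of_interest(pkt: Packet) -> bool:
    """A trivial stand-in for a tap's filtering rules."""
    return pkt.dst_port in MONITORED_PORTS

def groom(packets):
    """Drop everything the monitoring tools don't need."""
    return [p for p in packets if of_interest(p)]

traffic = [
    Packet("10.0.0.5", "10.0.1.9", 443, 1400),   # HTTPS: keep
    Packet("10.0.0.5", "10.0.2.7", 53, 80),      # DNS: drop
    Packet("10.0.3.2", "10.0.1.9", 1433, 600),   # SQL Server: keep
]
print(len(groom(traffic)))  # 2 of 3 packets survive the filter
```

The point is only that the filtering decision happens where the packet is captured, before anything is backhauled.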
For starters, organizations can select virtual monitoring solutions that report back and continuously monitor network health and application quality of experience. If an event requires troubleshooting, organizations can then retrieve the raw packet data and backhaul it over a GRE-encapsulated tunnel for further processing and dissemination to analytics tools. In this scenario, organizations have “virtual access” to east-west traffic for insight into application, database, and web service communications.
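As a rough illustration of what GRE encapsulation adds to each backhauled packet, the sketch below prepends a minimal RFC 2784 GRE header (no checksum, key, or sequence fields) to a captured packet. In practice the tunnel is built by the operating system or the visibility platform, not by application code like this:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType identifying an IPv4 payload

def gre_encapsulate(inner_packet: bytes) -> bytes:
    """Prepend a minimal GRE header: 2 bytes of flags/version (all zero
    for basic GRE, per RFC 2784) plus 2 bytes of payload protocol type."""
    header = struct.pack("!HH", 0x0000, GRE_PROTO_IPV4)
    return header + inner_packet

# A truncated, dummy IPv4 packet standing in for one captured at the tap:
captured = bytes.fromhex("4500002800010000401100000a0000050a000109")
tunneled = gre_encapsulate(captured)
print(tunneled[:4].hex())  # 00000800: the 4-byte GRE header
```

The overhead is small (4 bytes here, plus the outer IP header), which is part of why GRE is a common choice for shipping monitored traffic across a routed network.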
But this needs to be done while limiting the amount of data sent to monitoring tools, and the filtering features of virtual taps are helpful here. If there is too much traffic to backhaul every packet, virtualized visibility platforms can instead generate a continuous NetFlow feed covering most east-west traffic, while still allowing a deeper dive into packet data when needed. Physical or virtual packet brokers can then perform advanced data processing and load balancing, as well as application identification, geolocation, Secure Sockets Layer (SSL) decryption, and NetFlow generation.
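The NetFlow idea, reporting per-flow metadata instead of every packet, can be sketched as follows. The tuple layout and field names are illustrative only, not any vendor's export format:

```python
# Illustrative sketch: collapse per-packet observations into per-flow
# records keyed by the classic 5-tuple, as a NetFlow generator does.
from collections import defaultdict

def build_flow_records(packets):
    """Aggregate (src, dst, sport, dport, proto, size) observations
    into per-flow packet and byte counters."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, sport, dport, proto, size in packets:
        key = (src, dst, sport, dport, proto)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += size
    return dict(flows)

packets = [
    ("10.0.0.5", "10.0.1.9", 51000, 443, "tcp", 1400),
    ("10.0.0.5", "10.0.1.9", 51000, 443, "tcp", 900),
    ("10.0.3.2", "10.0.1.9", 40222, 1433, "tcp", 600),
]
records = build_flow_records(packets)
# Two flow records instead of three raw packets: the metadata feed is
# far lighter than backhauling every byte, yet still shows who talked
# to whom, on which ports, and how much.
```

When a flow record looks suspicious, that is the cue to pull the underlying raw packets for the deeper dive the article describes.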
Ultimately, organizations can have virtualized data access and intelligent visibility in any virtualized environment by leveraging the right tools to create an integrated, virtualized visibility platform. This eliminates network visibility blind spots throughout the virtualized and physical environment, making existing monitoring tools more effective. Organizations then have the application and network visibility, as well as intelligent data processing, that makes their applications and networks stronger—increasing their confidence in running mission-critical workloads and services in a mixed cloud environment.
Can your organization already do this? If not, it should get to work, or risk falling behind businesses that are already keeping tabs on their highly virtualized environments. Going back to the hot pocket: you don't want it too hot or too cold, so don't blindly bite down until you know it's just right.