- Analysts are not employed by SDxCentral.
- Views and opinions expressed in analyst content belong solely to the author and do not reflect the views of SDxCentral, LLC.
- SDxCentral does not fact-check analyst content. If you believe there is a factual error in analyst content, please notify [email protected]. Should we find factual irregularities, the article will be unpublished from the SDxCentral website.
Effective April 18th, 2019, the SDxCentral analyst blog syndication program has been terminated.
SDxCentral Statement about AvidThink, LLC
- Roy Chua, the founder of AvidThink, was a co-founder of SDxCentral. As of September 30, 2018, Roy is no longer affiliated with SDxCentral.
- The views expressed by AvidThink and Roy Chua are independent of SDxCentral and do not represent the views or journalistic principles of SDxCentral.
- As of April 18th, 2019, SDxCentral is no longer publishing AvidThink analyst blogs on the SDxCentral website.
The data center, whether private, public, or hybrid, is becoming the hub of an enterprise’s IT and central to the prospects of the organization. If your data center can’t perform, scale, or adjust, then your business won’t be able to either. Unfortunately, there is no bulletproof blueprint for how to build, or more accurately evolve, your data center to support all the needs of your business, a problem exacerbated by the fact that your business is constantly changing.
This flux explains the number of trends we have seen come, go, and come back again (in slightly different form) within the data center over the years. Many, on the surface, seem to contradict one another; for example, we have seen organizations attempt to both consolidate and distribute their data center resources, as well as shut them down (secure) and open them up (with new cloud offerings). The fact is the data center is constantly at odds with itself: on one hand, it needs to deliver the utmost in reliability, stability, and security, while on the other it needs to be highly malleable and agile.
To achieve this ‘Jekyll and Hyde’ data center, you must deploy solutions that can help you strike a balance between all these seemingly disparate requirements. While there are many routes you can take to get the functionality you need, there are three common, underlying capabilities you need to look for in any solution you choose to ensure your data center will serve you effectively today and in the future:
- Flexibility – Allowing you to quickly and easily make adjustments, add capacity, or move resources to meet changing needs. Ideally, this flexibility shouldn’t add complexity. In fact, look for solutions that can automate changes in response to variances within your data center environment to help you identify and resolve issues in real time. For example, you may look for solutions that can scale or reroute workloads when a path becomes congested, or request additional authentication before granting access when unusual behavior is detected. Having flexible solutions that can adjust to current conditions within your data center will ultimately help you deliver the ongoing performance and reliability you need. Vendors are actively adding capabilities to improve the overall flexibility of their solutions. For example, Cisco continues to add features to Open NX-OS to improve network programmability, Juniper just refreshed its MX series to add automated provisioning and self-healing capabilities, and Red Hat recently acquired automation startup Ansible. You can find more examples of the types of tools available to help you automate the provisioning and management of your data center in SDxCentral’s The Rise of DevOps Report.
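The condition-driven automation described above amounts to a simple control loop: read telemetry, compare against thresholds, and plan remediation actions. The sketch below is a minimal, hypothetical illustration in plain Python; the metric fields, threshold value, and action names are assumptions for this example and do not come from any vendor’s API.

```python
# Minimal sketch of condition-driven data center automation.
# PathMetrics, the threshold, and the action strings are hypothetical;
# a real deployment would use a vendor's telemetry and orchestration APIs.

from dataclasses import dataclass

CONGESTION_THRESHOLD = 0.85  # fraction of link capacity in use (assumed value)

@dataclass
class PathMetrics:
    path_id: str
    utilization: float     # 0.0 - 1.0
    anomalous_logins: int  # count of suspicious access attempts

def plan_actions(metrics: list[PathMetrics]) -> list[str]:
    """Decide remediation actions from a snapshot of current telemetry."""
    actions = []
    for m in metrics:
        if m.utilization > CONGESTION_THRESHOLD:
            # Shift workloads off a congested path.
            actions.append(f"reroute:{m.path_id}")
        if m.anomalous_logins > 0:
            # Require step-up authentication when unusual behavior is seen.
            actions.append(f"step_up_auth:{m.path_id}")
    return actions

if __name__ == "__main__":
    snapshot = [
        PathMetrics("leaf1-spine2", utilization=0.92, anomalous_logins=0),
        PathMetrics("leaf3-spine1", utilization=0.40, anomalous_logins=2),
    ]
    print(plan_actions(snapshot))  # ['reroute:leaf1-spine2', 'step_up_auth:leaf3-spine1']
```

In practice, the "plan" step would feed an orchestrator that executes the changes, which is what keeps this kind of flexibility from adding operational complexity.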
- Security – Ensuring effective controls are implemented to protect your resources and support your adherence to appropriate regulations. The solutions should have robust security measures in place (both physical and virtual, and within both the hardware and software that make up the solution) to protect the privacy and integrity of all your communications and data. This requires a comprehensive approach to security capable of mitigating risks through the implementation of controls at every step, phase, and layer. For example, when you are looking at infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or even the newer data center-as-a-service (DCaaS) offerings, it is important to understand the internal controls providers have in place (how they protect access to the servers housing your data), as well as the controls they offer (two-factor authentication, encryption, policy enforcement, etc.) to ensure the appropriate handling and use of your resources. The security of all the solutions and services you choose is particularly important to understand as you move to more multi-tenant, hybrid cloud environments. If you can’t dictate how data is handled and stored (e.g., ensure a virtual machine will only run within a specific geographic boundary), you may find yourself out of compliance with data sovereignty regulations, such as those that govern your ability to do business in almost every member state of the European Union (EU). Dozens of new startups are coming on the scene with security solutions targeted at cloud data centers. These include Bromium, CipherCloud, CrowdStrike, Cylance, HyTrust, vArmour, and Skyport Systems, among many others. In addition to the startups, there are a host of established vendors, as well as newly minted IPO companies, with security technology targeted at the cloud, including Check Point, Cisco, Intel, Qualys, Palo Alto Networks, and Proofpoint.
Check out the new SDxCentral Security Directory for information on security companies, products, projects and services.
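The data sovereignty point above can be reduced to a placement policy: workloads carrying regulated data may only be scheduled in an allow-listed set of regions. The following is a hedged sketch in plain Python; the region codes, label names, and policy shape are hypothetical, not taken from any specific cloud provider’s API.

```python
# Sketch of a data-sovereignty placement check: a VM labeled as carrying
# EU personal data may only be scheduled in EU regions. The region codes
# and label keys are illustrative assumptions, not a real provider's API.

EU_REGIONS = {"eu-west-1", "eu-central-1", "eu-north-1"}

def placement_allowed(vm_labels: dict, target_region: str) -> bool:
    """Return True if a VM may be placed in target_region under this policy."""
    if vm_labels.get("data-classification") == "eu-personal-data":
        return target_region in EU_REGIONS
    # Unclassified workloads are unrestricted in this simplified sketch.
    return True

if __name__ == "__main__":
    eu_vm = {"data-classification": "eu-personal-data"}
    print(placement_allowed(eu_vm, "us-east-1"))  # False: would breach sovereignty policy
    print(placement_allowed(eu_vm, "eu-west-1"))  # True: stays inside the EU boundary
```

A check like this only helps if the provider actually exposes and enforces region pinning, which is exactly the kind of control worth verifying before signing up for an IaaS, PaaS, or DCaaS offering.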
- Interoperability – Allowing you to keep your options open, so you can incorporate new technology platforms, cloud services, and converged infrastructures as they become available. Ensuring solutions do not close you off to innovation is critical to keeping your data center environment current. You should look for solutions that ‘play nice with others’ and steer clear of those that can only deliver an optimized experience if you deploy all ‘their’ infrastructure. Many cloud providers, such as Amazon and Microsoft, are built on proprietary technologies and thus make it challenging to manage resources across multiple clouds. These vendors have little incentive to let customers manage their competitors’ clouds, and a general lack of standards means it is tougher than it should be. At a minimum, solutions should support application programming interfaces (APIs) or offer a web-based portal to simplify provisioning and make it as easy as possible for you to roll out capabilities and optimize your environment. Enterprises and service providers have pushed for more open, standards-based solutions, giving rise to OpenStack and the network functions virtualization (NFV) movement toward more open platforms and standards. Open source projects in general are growing in popularity.
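The API-driven provisioning argument above can be sketched as a thin abstraction layer: each cloud’s proprietary call sits behind one common interface, so a workload can be provisioned uniformly across providers. The provider classes and method names below are hypothetical placeholders, not real provider SDK calls.

```python
# Sketch of a common provisioning interface over multiple clouds.
# The provider classes are stand-ins; real code would wrap each
# provider's actual SDK behind this shared interface.

from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def provision_vm(self, name: str, size: str) -> str:
        """Provision a VM and return an identifier for it."""

class HypotheticalCloudA(CloudProvider):
    def provision_vm(self, name: str, size: str) -> str:
        # Would translate into provider A's proprietary API call.
        return f"cloud-a/{name}/{size}"

class HypotheticalCloudB(CloudProvider):
    def provision_vm(self, name: str, size: str) -> str:
        # Would translate into provider B's proprietary API call.
        return f"cloud-b/{name}/{size}"

def provision_everywhere(providers: list[CloudProvider], name: str, size: str) -> list[str]:
    """One call fans out to every cloud through the shared interface."""
    return [p.provision_vm(name, size) for p in providers]

if __name__ == "__main__":
    ids = provision_everywhere([HypotheticalCloudA(), HypotheticalCloudB()], "web01", "small")
    print(ids)  # ['cloud-a/web01/small', 'cloud-b/web01/small']
```

The design choice here is the interoperability point in miniature: adding a third cloud means writing one adapter class, not rewriting every workflow that provisions resources.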
If you choose solutions with these three characteristics, then no matter the route your data center takes, you should be able to make the most of your resources. You will be free to take advantage of the latest data center trends: deploying high-density, highly virtualized servers to improve the efficiency and scale of your environment; moving applications to the edge to better address the needs of local offices and customers; deploying micro-segmentation to minimize the impact of advanced cyberattacks; and finding the right mix of on-premises, private, and public cloud resources. So, when you are evaluating solutions and services, look for those that deliver the foundational flexibility, security, and interoperability you will need to create an infrastructure that can fuel your success now and long into the future.