Microsoft is adding the commercial version of the Docker Engine to Windows Server 2016, continuing the company’s efforts to embrace Linux and the open-source world.
It means that Docker will now run natively on Windows, and it’s also a step in Microsoft’s efforts to keep Azure open. Under CEO Satya Nadella, Microsoft has not only embraced the cloud but has also accepted the importance of Linux there. Microsoft could have stuck with its own form of containers; instead, it has embraced Docker.
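For administrators curious what this looks like in practice, Microsoft’s documented path for Windows Server 2016 installs the Docker Engine through the PowerShell package manager. A minimal sketch, run in an elevated PowerShell session on a Windows Server 2016 host (the `DockerMsftProvider` module and `microsoft/nanoserver` image names follow Microsoft’s launch-era documentation; they are not described in this article):

```shell
# Install the Docker provider module from the PowerShell Gallery
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

# Install the commercial Docker Engine package through that provider
Install-Package -Name docker -ProviderName DockerMsftProvider -Force

# A reboot is required so the containers feature and the Docker service start
Restart-Computer -Force

# After the reboot: run a native Windows container
docker run --rm microsoft/nanoserver cmd /c echo "Hello from a Windows container"
```

Unlike earlier Docker-on-Windows setups, which ran containers inside a Linux virtual machine, these are Windows Server containers running directly on the Windows kernel.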
Today’s keynote, delivered by Executive Vice President Scott Guthrie, also devoted a lot of time to the hybrid cloud. Specifically, Microsoft announced monitoring capabilities that span the cloud and on-premises equipment; the key point is that no software has to be added or upgraded on the on-premises side.
The monitoring capabilities, which are in technical preview, include some security intelligence. A common theme in security gear lately is to promise a way to sift through the hailstorm of alerts that any network generates, most of which are false positives. Joining that crowd, Microsoft said the Azure monitoring capabilities would include behavioral analysis to identify true breaches.
Azure’s monitoring will also span both Windows and Linux servers.
New Types of Azure
Microsoft also announced the second technical preview of Azure Stack, which extends the Azure cloud and its management to on-premises equipment. In other words, it’s a way to create an Azure-like private cloud inside an enterprise’s own data center. Originally conceived as a software product, Azure Stack is going to debut in the form of hardware appliances, due to be available in mid-2017.
Inside Azure itself, Microsoft announced three new types of instances, adding to the four it already provides.
The H-series provides high-performance computing, focusing on jobs such as genomics or fluid-dynamics calculations. The L-series is storage-heavy and aims to provide low latency to applications such as databases.
Microsoft has the N-series in preview. These instances run on Nvidia chips (the others are Intel-based), using graphics processing units (GPUs) for tasks such as graphics rendering. One beta customer is ESRI, a data visualization specialist that produces three-dimensional simulations and maps. These are compute-heavy jobs that were previously difficult to run in the cloud, said Bradley Bartz, ESRI’s performance engineering team lead, during an onstage demo at Ignite.
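Requesting one of these GPU-backed instances works like any other Azure VM request, just with an N-series size. A hypothetical sketch using the cross-platform Azure CLI (the `Standard_NV6` size, the `southcentralus` region, and the placeholder credentials are illustrative assumptions, not details from the announcement; substitute whatever size and region your subscription offers):

```shell
# Create a resource group in a region that hosts N-series hardware
az group create --name gpu-demo --location southcentralus

# Create an NV-series VM (visualization-oriented, NVIDIA GPU-backed)
az vm create \
  --resource-group gpu-demo \
  --name nv-demo-vm \
  --image Win2016Datacenter \
  --size Standard_NV6 \
  --admin-username azureuser \
  --admin-password '<your-password>'
```

The only difference from a general-purpose VM is the `--size` value; billing and GPU availability vary by region.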
GPUs are also being used to power artificial intelligence and deep learning, as Jason Zander, corporate vice president of Azure, pointed out in a separate session this morning.
Microsoft had another AI-related announcement today. The company has deployed FPGAs — chips whose hardware can be reprogrammed — in 15 data centers across five continents. They form a pool of resources that are being tapped as a big AI engine.
But Azure is taking advantage of the FPGAs itself. They’re being used to accelerate networking decisions, Zander said, giving Microsoft the ability to reach 25-Gb/s speeds.