Microsoft continues to embed artificial intelligence (AI) — most notably OpenAI’s GPT-4 — into its portfolio, and this time for security. The company unveiled Microsoft Security Copilot, which it claims is the first security product that is built on the latest innovations in large language models to support all levels of human analysts.
“For all of the millions of dollars security vendors spend talking about AI for cybersecurity, Microsoft is the first to make AI for security operations real with Security Copilot,” Forrester Senior Analyst Allie Mellen and VP and Principal Analyst Jeff Pollard wrote in a blog. “It uses generative AI to aid human analysts (but not replace them) in investigation and response.”
Microsoft chairman and CEO Satya Nadella touted Security Copilot as “the next big step forward into the new world of cyber operations that augments every security role by combining advanced AI models with security-optimized infrastructure, threat intelligence, and skills.”
“This we believe will open doors to entry-level defenders at a time when cybersecurity workers are sorely needed, empower highly skilled analysts to focus on the next level of cyber risks, and transform every aspect of the SOC [security operations center] productivity from threat detection to hunting to incident response,” Nadella said during this week’s Microsoft Secure event. “Together, we can give the agility advantage back to defenders to build a safer and more secure world.”
The new AI tool offers a variety of capabilities, including human-readable explanations of vulnerabilities, threats, and alerts from Microsoft’s own security products (and later from third-party tools); answering natural-language questions about the enterprise environment and its incidents; summarizing incident analyses and recommending next steps; and enabling users to edit prompts to correct or adjust responses and to share the findings with others.
Vasu Jakkal, Corporate VP of compliance, identity, management, and privacy at Microsoft Security, said these capabilities can cut security incident response time from hours or days to minutes; catch weak signals hiding behind noise that human analysts might miss; and help address the talent gap.
Security Copilot is a separate offering from the existing Microsoft Security portfolio, but it is poised to become the connective tissue for all of Microsoft’s security services, with plans to integrate third-party products and data in the future, Forrester analysts noted.
It is supported by Microsoft’s global threat intelligence, bolstered by acquisitions such as RiskIQ and Miburo, and the 65 trillion threat signals the company sees every day, and it integrates with Microsoft Sentinel and Microsoft Defender, the vendor claims.
“Microsoft Security Copilot is the first and only generative AI security product that builds upon the full power of GPT for AI to defend organizations at machine speed and scale,” Jakkal said.
A Game Changer for AI in Security

Forrester analysts pointed out that the security industry is currently inundated with misleading marketing claims such as "autonomous SOCs," "AI assistants," and "AI analysts." Microsoft Security Copilot, however, finally makes AI do more than enhance threat detection.
“While other security vendors were marketing, Microsoft poured billions into OpenAI, locked the company in by offering it Azure compute credits, kept innovating itself, and will likely lean into its route to market via enterprise bundling,” they wrote. “This is the first time a product is poised to deliver true improvement to cybersecurity investigation and response with AI.”
This announcement signals a shift away from an era where AI's role was limited to detection and toward a new era where AI has the potential to improve one of the most important issues in security operations: the Analyst Experience (AX).
In 2021, Microsoft announced plans to invest $20 billion in cybersecurity over the following five years. “Whether or not Microsoft reduces – or never reaches – the 2021 number, for the rest of the security industry this is a painful reminder that Microsoft is continuing to eat their lunch – not just in the enormous success of its security business, but now with its innovation potential,” analysts wrote.
The Limitations of Microsoft Security Copilot

First of all, the new tool is still in private preview and remains in development. Microsoft has not announced a general availability date yet.
“The more valuable data Security Copilot can take in, the faster it will learn and the more useful it will become,” Forrester analysts noted. “But its utility is also constrained by the same limitations that exist for security teams today: poor visibility and bad situational awareness will limit its impact.”
Secondly, during the demo, Microsoft executives showed that Security Copilot can be wrong, citing an example in which it gave an answer referencing Windows 9, an operating system that doesn't exist.
“AI-generated content can contain mistakes,” Jakkal said during the demo. “But Security Copilot is a closed loop learning system, which means it's continually learning from users and giving them the opportunity to give explicit feedback.”
“Trusting AI is harder when the first launch of the product shows it saying something wrong about the company that developed it. While the broader message of being able to correct the AI is productive, ensuring accurate results becomes incredibly important when leveraging the technology to train new analysts,” the Forrester analysts echoed.
“While we agree with Satya Nadella that this is an iPhone-level moment in technology, it’s worth remembering that the first iPhone didn’t have an App Store, couldn’t multitask, and was locked to one carrier. Security leaders can’t sleep on technology like this because of our naturally ingrained – and constantly reinforced – technology skepticism. But if Copilot doesn’t make security pros faster or better, engagement rates will plummet once the novelty wears off,” they concluded.