Plenty of startups claim to use machine learning to advance security. A company called Deep Instinct claims to go further, applying deep learning to build a system that works more like human intuition.

The result, the company claims, is a security system that can recognize zero-day threats on its own by building its own sense of what's normal and what's malicious.

Deep learning is a fairly recent step in the march toward artificial intelligence. Traditional machine learning requires an expert to step in at some point and identify the features the system should pay attention to. Deep learning doesn't need that level of supervision.
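The distinction can be sketched in a few lines of code. This is purely illustrative, not Deep Instinct's pipeline; the function names and the specific features are invented examples of the kind of thing a human expert might hand-pick.

```python
def expert_features(file_bytes):
    # Classic machine learning: a human expert decides which signals matter.
    return {
        "size": len(file_bytes),
        "is_executable": file_bytes[:2] == b"MZ",  # Windows PE magic number
        "null_ratio": file_bytes.count(0) / max(len(file_bytes), 1),
    }

def classic_ml_score(file_bytes, model):
    # The model only ever sees the features the expert chose.
    return model(expert_features(file_bytes))

def deep_learning_score(file_bytes, network):
    # The network sees raw bytes and learns its own internal features.
    return network(file_bytes)
```

The practical difference is where the bottleneck sits: in the first pipeline, the system can never notice a signal the expert didn't think to encode.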

That's the way human minds operate, too. "We learn things, and then we just know them," said Maya Schirmann, Deep Instinct's chief marketing officer, at the recent Black Hat conference.

Deep Instinct claims to be the first startup applying deep learning specifically to cybersecurity, but it probably won't be the last.

Deep learning has already become famous; it's inside AlphaGo, the Google system that can beat humans at the game of Go. And at least one other deep learning startup has emerged: Nervana Systems, which uses graphics processing units (GPUs) for now and hopes to develop its own deep learning chip.

Nervana isn't specializing in security. But like Nervana, Deep Instinct is using GPUs to produce what it describes as an artificial brain.

That brain was trained by being exposed to hundreds of millions of files of every type: applications, PDFs, and so on. About half were benign, and half were malicious. The process took about 24 hours, Schirmann says.

Some human intervention was necessary during this first step, just as it is with a human brain that's early in development. Humans told Deep Instinct's AI which files were good or bad — but what distinguishes deep learning from machine learning is that the brain wasn't instructed which features to watch. Based on what it knew about the "good" and "bad" piles, it began drawing its own conclusions about what a malicious file looks like.
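A toy version of that training step looks like the following. This is a sketch, not Deep Instinct's system: a shallow logistic model on raw byte histograms stands in for the deep network, and the synthetic "malicious" files are salted with byte 0xEB. The point it illustrates is the one above: the model is given only good/bad labels and is never told which feature to watch, yet it discovers the telltale byte frequency on its own.

```python
import math
import random

random.seed(0)

def make_file(malicious):
    data = [random.randrange(256) for _ in range(200)]
    if malicious:
        data += [0xEB] * 40  # the hidden tell; the model is never told about it
    return data

def histogram(data):
    counts = [0.0] * 256
    for b in data:
        counts[b] += 1.0
    total = sum(counts)
    return [c / total for c in counts]

# Labeled corpus, half benign and half malicious, as in the article.
corpus = [(histogram(make_file(label)), label) for label in [0] * 50 + [1] * 50]
random.shuffle(corpus)

# Train by stochastic gradient descent on logistic loss. No feature was
# hand-picked: the input is the raw distribution of all 256 byte values.
weights, bias, lr = [0.0] * 256, 0.0, 1.0
for _ in range(50):
    for x, y in corpus:
        z = bias + sum(w * xi for w, xi in zip(weights, x))
        p = 1.0 / (1.0 + math.exp(-z))
        grad = p - y
        bias -= lr * grad
        weights = [w - lr * grad * xi for w, xi in zip(weights, x)]

def predict(data):
    x = histogram(data)
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return z > 0.0  # True -> flagged as malicious
```

After training, the model classifies files it has never seen, having inferred from the labeled piles alone which bytes distinguish them.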

Deep learning can be a lot more accurate than machine learning. But beyond accuracy, deep learning can spot threats that haven't specifically been seen before, based on similarities to previous attacks. So, the startup is emphasizing its ability to intercept zero-day threats.

Schirmann uses the analogy of face recognition: If a face is partially obscured in a picture, humans and deep learning can intuitively recognize it as a face, whereas machine learning might have trouble. "It's still a face, even though it might be at an angle you've never seen before," she says.
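The same idea can be shown against the old-school alternative. In this hedged sketch (not Deep Instinct's method, and the payload strings are invented), an exact signature only catches files it has literally seen before, while a similarity measure flags a never-seen variant of a known family; cosine similarity over byte histograms is a crude stand-in for a learned feature space.

```python
import hashlib
import math

known_malware = [b"dropper-v1 payload AAAA", b"dropper-v1 payload BBBB"]
signatures = {hashlib.sha256(m).hexdigest() for m in known_malware}

def signature_match(sample):
    # Classic approach: exact hash lookup. A single changed byte defeats it.
    return hashlib.sha256(sample).hexdigest() in signatures

def byte_histogram(sample):
    counts = [0] * 256
    for b in sample:
        counts[b] += 1
    return counts

def cosine(a, b):
    # Similarity between byte distributions, ignoring overall length.
    ha, hb = byte_histogram(a), byte_histogram(b)
    dot = sum(x * y for x, y in zip(ha, hb))
    norm = math.sqrt(sum(x * x for x in ha)) * math.sqrt(sum(x * x for x in hb))
    return dot / norm if norm else 0.0

def looks_malicious(sample, threshold=0.9):
    return any(cosine(sample, m) >= threshold for m in known_malware)

variant = b"dropper-v1 payload AA"  # same family, bytes never seen before
```

The variant sails past the signature check but lands close enough to a known sample to be flagged, which is the face-at-a-new-angle intuition in miniature.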

Customers get access to the artificial brain via a small agent (a few megabytes, Schirmann says), which goes on devices such as smartphones or laptops. Updates to the agent are available once per quarter.

Deep Instinct can also be applied in non-real-time form, with the artificial brain packaged as an appliance. The company also plans to offer its services to cloud access security brokers (CASBs), which enforce security policy on enterprise cloud applications but don't analyze threats. Deep Instinct would provide that analysis. The startup has at least one such partnership in place, with FireLayers.

Based in Tel Aviv, Deep Instinct is roughly two years old, with about 65 employees. The company has raised funding through Series B but isn't disclosing details.