SAN FRANCISCO – Corporate CISOs, like everyone in today's enterprises, have to determine their own relationships with artificial intelligence (AI) as it enters mainstream IT. Because AI brings so much more data into a system and creates new attack surfaces, the question arises: does AI fall within the CISO's jurisdiction, should its use be left to the data team, or is incorporating AI a business decision for the C-level team to make?

Most likely, it's all of the above at most corporations. Still, somebody has to own and steer the use of AI. Who that somebody should be was a topic discussed by a group of CISOs at last week's RSA Conference. The group included Dale Zabriskie, field CISO at Cohesity; Omar Khawaja, field CISO at Databricks; Cathy Polinsky, CTO at DataGrail; Greg Notch, CISO of Expel; and Jason Mar-Tang, field CISO at Pentera.

Why CISOs often struggle to align with business AI

“The reason most CISOs get concerned about AI is when their businesses get excited about it,” Khawaja of Databricks said. “So the bigger root cause is the CISOs' lack of ability to align to the pace of the business. That's really the big trouble, so a lot of CISOs are just shying away and letting the business and the data team go do their own thing, because they're not quite sure how to do AI and they'd rather not be involved, especially if they're not going to be able to do it well.”

When a discussion about using AI is brought up in a meeting, CISOs should have a predetermined point of view about how they will approach it, the CISOs agreed.

“I always start the conversation with, 'All right, how do we say yes to this, if we possibly can?'” Notch of Expel said. “And if you show up like that in the conversation, you're gonna get insight into more conversations. Whereas if you show up and say, 'Hey, let me just introduce you to my GRC [governance, risk, compliance] team and we'll 3-PA (third-party authentication) this for you,' you're gonna get a different response from your CEO or your CTO.”

“Moving from the 'Department of No' to the 'Enabled Department' goes a long way not just toward enabling the business, but toward helping from a user-base perspective,” Zabriskie of Cohesity said. “For those individuals who may feel like, 'Why do I have to log into MFA every day?' or whatever, if they understand the 'why' behind it, that enablement helps reduce that threat vector as well.”

Successful IT security organizations, the ones on the technical side, have figured out how to translate the language of their expertise for their business people, Zabriskie said.

“Instead of showing up speaking about their experience, they figured out how they impact the business, because their one job is to keep the revenue-generating pipelines up and running,” Zabriskie said. “And once they understand that, and they couch what they say in those terms, now they get buy-in, they get a seat at the table, and they get engaged, because it's all about the business. It's not about the fact that I've got X many hits on the network.”

Keeping proper security in place ahead of key business decisions

CISOs and executive teams are struggling to keep up with business decisions being made before proper security measures are in place. When a company is hit by an attack, everybody has to be on board for the response – especially the CISO. The hackers will talk to the CEO or CIO, and while those are the ranking decision-makers, they're not going to have all the information in front of them to make the correct response decision.

“We ran through this in a ransomware workshop for Johnson & Johnson, Ford Motor and JPMorgan Chase just last week,” Cohesity's Zabriskie said. “And there's a common theme that comes out of the experience: Every single group that comes out says we do not have all the (right) people in the room when we talk about business resilience or recovering from a ransomware attack. This just opens their eyes so much.”

CISOs need to develop an AI risk framework outlining common risks, controls, and the mapping of responsibilities between organizations and third-party providers. AI itself can help automate threat analysis and data handling.

“Those (AI) tools don't work without the people's ear,” Notch said. “It's optimizing decisions: you're helping SOC analysts make better decisions with better context more quickly, without having to do a lot of extra work. And then eventually, if you watch enough of the decisions they make, you take those kinds of decisions off their plate, because AI can learn what a good decision is, and effectively move the decisions a SOC analyst makes up the value chain, closer to your business.”

Looking ahead to a year from now

A year from now, after the AI dust has settled, what will CISOs be talking about at RSAC?

“From a threat landscape perspective, that's where AI will continue to be a huge thing,” Zabriskie said. “We'll see the attackers become more efficient, ransomware as a service will really take off, and then there'll be many more attacks.

“We're just starting now to see fewer encryption (attacks) and less of a focus like that. Organizations that have high reporting responsibilities will have their data changed – not just encrypted, but actually changed, so that they can't trust it. As we talk about the integrity of data, the real challenge for organizations is trusting the data that they're restoring from. That's what we're (Cohesity) really focused on: Can you take that data set and put it where you know you have integrity around it?”

Khawaja said, “I think all of this should drive companies to try and, hopefully, finally, be more proactive. And say that we need to see more (round)tables like this more generally – not only just for the big banks, big enterprises and corporates because they have to, but because adversaries are setting a higher standard or whatever, we'll see more of a push in general. Everybody benefits from that.”

Notch said, “Automated exploitation will be a big topic. We're already seeing it, because effectively there are companies that now do it to help other companies. And I think identity attacks are going to be true impersonation attacks using deepfakes; we've already had fraudulent hiring, like people showing up in Zoom rooms who aren't who they say they are. All of that stuff is too scary.”