Cisco this month published Introducing Responsible Cisco AI, an outline of the artificial intelligence (AI) controls in the company’s product development lifecycle and across its solutions, services, and enterprise operations. 

The Cisco outline explains how the company is specifically meeting the tenets of AI governance covered in Gartner Research's February report, Innovation Insight for Bias Detection/Mitigation, Explainable AI and Interpretable AI. The Gartner report advised “data and analytics leaders must understand responsible AI” to “facilitate understanding, trust, and performance accountability required by stakeholders.” 

Key elements of Cisco's AI governance are "transparency, fairness, accountability, privacy, security, and reliability," according to a company blog by Anurag Dhingra, CTO for Cisco Webex. The company developed its Responsible AI Initiative, which includes a governance framework, to guide “internal development, application, and use, while providing a vital communication channel with our customers, partners, industry, and related organizations.”

The Responsible AI Initiative includes a guidance and oversight committee led by senior executives across Cisco, lightweight controls implemented within the secure development lifecycle (SDL), and external engagement with governments to “understand global perspectives on AI’s benefits and risks.” 

Understanding AI Ethics

Juniper Networks released research in June indicating enterprise AI adoption has proliferated over the past year, with 63% of the 700 global IT leaders surveyed saying they are at least “most of the way” through their planned AI adoption goals. Almost all (96%) of respondents expect AI will help reduce risk and increase quality within their organizations, and that business functions such as networking/cloud, IT infrastructure, and supply chain will benefit most from AI implementation.

But the reality is that only 9% of those respondents consider their AI governance and policies to be “fully mature,” a level that includes hiring a company-wide AI leader and setting up responsible AI standards and processes.

Gartner cites potential reduction in model accuracy, dataset constraints, a lack of standards for “explainable and interpretable” AI, and an absence of bias mitigation as the major risks AI poses for organizations. 

Gartner’s strategic planning assumptions predict that by 2023, personnel hired for AI work across the industry will have to demonstrate an understanding of AI ethics. The research firm also estimated that by 2024, 60% of AI providers will implement measures to mitigate possible harm caused by AI technologies. 

SDxCentral reached out to Cisco about its Responsible AI Initiative, but the company was unable to provide comment.