This is the second installment of Deloitte on Tech. Each month, a leader from Deloitte will cover a topic of high interest to SDxCentral readers.
Organizations gain a competitive advantage when they place leading-edge technologies at the forefront of their approach to business issues. Generative artificial intelligence (AI) marks a significant departure from the AI we’ve come to know. Large language models (LLMs) can respond to user prompts with natural language outputs that convincingly mimic coherent human language. Moreover, some of these models are effectively barrier-free: using them requires no knowledge of AI, much less an understanding of the underlying math and technologies, making generative AI a tool for virtually everyone.
Naturally, there is growing intrigue around applications for generative AI in the business realm. As with all cognitive tools, the outcomes will depend on how they are used, and that includes managing the risks. Trust is critical to doing business and to elevating the customer experience, and it is hard to gain and easy to lose. The question must be asked: Can business users trust the outputs of this kind of AI application, and if not, how can trust be achieved?
New bots on the block

To this point, AI has broadly been used to automate tasks, uncover patterns and correlations, and make accurate predictions about the future based on current and historical data. Generative AI is designed to create data that looks like real data. Put another way, generative AI produces digital artifacts that appear to have the same fidelity as human-created artifacts. The operative word is “appear.”
CIOs and technologists know that generative AI is not “thinking” or being creative in a human way, and they likely also know that the outputs are not necessarily as accurate as they appear. Nontechnical business users, however, may not know this, and may consequently trust the outputs more than they should.
Trust is not an inherent quality of AI but instead the product of AI governance, risk mitigation and the intentional alignment of people, processes and technologies across the enterprise. Technology designed and used with ethical intent can help people and organizations pursue achievement and prosperity with more confidence and peace of mind. As enterprises wade into the fast-moving field of generative AI, there are factors of trust and ethics that should be addressed with the following actionable tips:
1. Educate around hallucinations and misinformation

A generative model draws on its training data to produce coherent language or images, which is part of what has startled and enticed early users. Yet an output’s phrasing and grammar may be convincing while its substance is partially or entirely inaccurate.
One of the risks with this kind of natural language application is that it can “hallucinate” an inaccurate output in complete confidence. It can even invent references and sources that do not exist. In a sense, the model can be forgiven: its function is to generate digital artifacts that look like human artifacts. Yet coherent data and valid data are not necessarily the same.
A fundamental risk is that users may place complete confidence in erroneous or biased outputs and make decisions and take actions based on a falsehood.
One way to help mitigate this risk is through AI governance. Many of the leading practices associated with other kinds of AI also apply to generative models: workforce upskilling, waypoints for decision-making across the AI lifecycle, structured oversight, ubiquitous documentation, and the many other activities that promote Trustworthy AI™.
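Some of that oversight can be partially automated. As a minimal, illustrative sketch only, the Python snippet below checks whether the sources an LLM cites actually resolve on the web, flagging invented references for human review. The extract_urls helper, the URL pattern, and the error-status cutoff are assumptions made for the example, and a resolving URL still says nothing about whether the source supports the claim.

```python
# Illustrative sketch: flag LLM-cited URLs that do not resolve.
# Uses only the Python standard library; helpers are hypothetical.
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\"'>]+")

def extract_urls(text: str) -> list[str]:
    """Pull candidate source URLs out of a model's response."""
    return URL_PATTERN.findall(text)

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the cited URL answers with a non-error HTTP status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        # Unreachable, malformed or nonexistent: treat as unverified.
        return False

def flag_unverified_citations(llm_output: str) -> list[str]:
    """List every cited URL a human should check before trusting the output."""
    return [url for url in extract_urls(llm_output) if not url_resolves(url)]
```

A check like this catches only one failure mode; confirming that a real source actually supports the model’s claim still requires a person.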
2. Implement checks against plagiarism

Generative AI outputs reflect the model’s training data, and that information comes from the real world, where things like attribution and copyright are important and legally upheld. Yet even if a model does cite accurate source information, it may still present outputs that obscure attribution or cross the line into plagiarism and copyright or trademark violations.
How do we contend with attribution when a tool is designed to mimic human creativity by parroting back something drawn from its training data? If an LLM outputs plagiarized content and the enterprise uses it in its operations, a human, not the generative AI model, is accountable when the plagiarism is discovered.
Recognizing the potential for harm, organizations may implement checks and assessments to help ensure attribution is appropriately given. Yet, if human fact-checking of AI attribution becomes a laborious process, how much productivity can the enterprise actually gain by using generative AI?
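One way to keep that fact-checking tractable is to automate the first pass. Below is a minimal sketch, not an endorsed tool, that screens an LLM output against a corpus of known source documents for long verbatim overlaps. The eight-word n-gram size and 20% threshold are arbitrary assumptions, and a flag means “route to a human reviewer,” not “plagiarism proven.”

```python
# Illustrative sketch: coarse verbatim-overlap screen for LLM outputs.
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Split text into overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, source: str, n: int = 8) -> float:
    """Fraction of the candidate's n-grams appearing verbatim in the source."""
    candidate_grams = ngrams(candidate, n)
    if not candidate_grams:
        return 0.0
    return len(candidate_grams & ngrams(source, n)) / len(candidate_grams)

def needs_review(llm_output: str, corpus: list[str], threshold: float = 0.2) -> bool:
    """Route the output to a human reviewer if any source overlaps heavily."""
    return any(overlap_ratio(llm_output, doc) >= threshold for doc in corpus)
```

A screen like this narrows the set of outputs that need human eyes; it does not detect paraphrased borrowing or settle questions of copyright.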
Finding the balance between trust in attribution and human oversight will be an ongoing challenge, with significant legal and brand implications for the enterprise.
3. Foster real transparency and user explainability

Today’s generative AI models often come with a disclaimer that the outputs may be inaccurate. That may look like transparency, but in reality many end users do not read the terms and conditions and do not understand how the technology works, so the model’s explainability suffers. To participate in risk management and ethical decision-making, users should have accessible, nontechnical explanations of generative AI, its limits and capabilities, and the risks it creates.
Enterprise-wide AI literacy and risk awareness are becoming an important part of any company’s day-to-day operations. This is perhaps even more true with generative AI.
To promote the necessary AI understanding, CIOs and business leaders may look to existing workforce training and learning sessions, deliver explanatory presentations to business users, and foster an enterprise culture of continuous learning.
AI accountability on the road ahead

At scale, transparency with generative AI becomes elusive, and “keeping the human in the loop” becomes a growing challenge. It is also unclear at this point what consequences mass adoption of generative AI may bring, such as the proliferation of fabricated “facts” to the detriment of objective and complete truth. And yet, these challenges are unlikely to hinder generative AI’s adoption.
No matter how powerful it becomes, we still need the analysis, scrutiny, context awareness and humanity of people at the center of our AI endeavors. We’re committed to leading the dialogue on trustworthy and ethical technology as a widely accepted social and institutional priority, so that more people can enjoy privacy, security, and the equitable benefits of a tech-savvy world.
This AI era is a time when humans work with machines to achieve things neither could achieve independently. Now is the time to derive viable methods of accountability, trust and ethics, linking the generative AI product and its outcomes with its creator, the enterprise.