Forrester Founder and CEO George Colony has broken the research firm's traditional stance of caution on new technology and urged organizations to act swiftly on generative artificial intelligence (genAI), calling it the most significant technological change in decades. Colony's comments came during this week’s Forrester Security and Risk event.
“Forrester, when a new technology arrives, we typically tell our clients to wait a little bit,” Colony said during his keynote speech. “But I'm going to break that rule for you here this morning and say that you cannot wait on generative AI … You have to move now. Now is the time.”
Colony noted that big technological change usually happens during unpredictable “thunderstorms” and is typically triggered by a change in user interfaces, and genAI meets both conditions.
Jeff Pollard, VP and principal analyst at Forrester, echoed this in his own keynote, noting that unlike the metaverse, genAI has all the catalysts of significant funding, beta tests at scale, and enterprise building and integration, as well as the dependencies of massive data volumes, skilled practitioners and substantial compute power.
GenAI will end WWW, Google search and current connection to trust

Colony and Forrester Principal Analyst Allie Mellen shared their definition of genAI.
“Generative AI is a technology which lets people converse with big piles of data and enables them to create something new and original from that big pile of data,” Colony said.
“Generative AI is ultimately a very effective predictor or probabilistic model for the next word, or more often referred to as a token in a sequence. [It’s] very, very good at this, especially for very complex sequences,” Mellen said, adding that it offers functions such as content creation, articulation and chatbots, and, in the future, behavior prediction.
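Mellen's description of genAI as a probabilistic next-token predictor can be illustrated with a toy sketch (not from the keynote, and vastly simpler than a real LLM): a bigram model that predicts the most likely next token from observed frequencies.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most probable next token, or None if the token is unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Toy corpus: "cat" follows "the" twice, "mat" once.
model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # prints "cat"
```

Real generative models replace these frequency counts with learned probability distributions over enormous vocabularies and contexts, which is what makes them "very, very good" at complex sequences.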
Colony shared his “very modest” predictions for the impact of genAI: the end of the World Wide Web, the end of Google and traditional web search, and a revival of, refocus on and reconnection to trust.
He encouraged the audience to invest deeply in genAI: build at least 10 ongoing projects and one promising operational application, and increase collective and executive IQ on the technology by 2024.
Use cases for enterprises, security teams and attackers

During the keynote titled “Adapt And Adopt: Balance The Acute Risk With The Burgeoning Reward Of AI,” Pollard and Mellen pointed out several use cases of genAI for enterprises, security teams and attackers, while debunking some of the genAI myths.
Use case #1: Enterprises will adopt genAI the fastest and need oversight

Colony argues that only a few companies will build foundational genAI models using open-source large language models (LLMs), due to the high cost and complexity; instead, most businesses will use pre-built models from large tech companies.
Enterprises’ genAI implementations will focus on interacting with these pre-built models through APIs, whether directly from providers such as OpenAI or via Microsoft Azure, Amazon Bedrock (Titan), Google's Bison on Vertex AI, Hugging Face and Cohere, which is less expensive than building models from scratch.
Almost all businesses using genAI are expected to build applications on top of these models and fine-tune them for their specific needs, an approach already becoming common in marketplaces and app stores. Examples include Microsoft Copilot, Google Workspace, Jasper.AI and Yext.
“Very few companies are going to need to think about securing the foundational model,” Pollard said. “More companies will need to deal with the API security ramifications of integrating with models. And then everyone out there is going to have to deal with building applications based on models or fine-tuning them.”
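As a minimal illustration of the API-integration pattern Pollard describes, the sketch below assembles a request for OpenAI's chat completions endpoint. The endpoint URL and payload shape follow OpenAI's public API, but the model name and prompt here are placeholder assumptions, and nothing is sent over the network.

```python
import json
import os

# Assumed endpoint: OpenAI's chat completions API.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt, model="gpt-4o-mini"):
    """Assemble the URL, headers and JSON body for a chat completion call.

    The API key is read from an environment variable; the Bearer header
    is OpenAI's standard auth scheme. No request is actually sent here.
    """
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return API_URL, headers, json.dumps(body)

url, headers, body = build_chat_request("Summarize this incident report.")
print(url)  # prints "https://api.openai.com/v1/chat/completions"
```

Keeping the key in an environment variable rather than hard-coding it is one small part of the API-security ramifications Pollard mentions; credential handling, rate limiting and data governance around such calls all fall on the integrating enterprise.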
Use case #2: Security teams will adopt genAI shortly after the business, dependent on vendor integration

Mellen has noticed a high level of interest among security teams in using genAI within security tools: “I have never seen more excitement and interest over new technology.”
“We really hope that generative AI is going to be useful for us in this way, that it's something we can cover with and enjoy. But we do need to be careful of and aware of the risks that come along with that,” she said.
Mellen argues there is a common misconception about generative AI's role in enhancing the skills of entry-level security analysts. Like giving a toddler scissors, handing genAI to entry-level personnel may not be safe or effective without proper training and an understanding of foundational cybersecurity principles.
“Ultimately, generative AI and security tools are going to be very useful for those folks who are experienced and more senior level because they have been itching for a tool that is going to help them validate what's happening in the environment and respond faster,” she said.
Mellen also argues that chatbots are not the best use case for genAI in security; instead, genAI tools should provide meaningful context and enhance security operations.
She pointed out the need for genAI tools to integrate into the analysts' workflows naturally. In the security realm, “SOC [security operations center] analysts want convenience, not conversations.”
Use case #3: Attackers will take the longest, unless it’s a nation-state

Another myth is cyber attackers’ broad use of genAI, Mellen said. “Ultimately attackers, excluding nation-states, they're gonna be the last to adopt generative AI because they don't have to. They're already doing a good enough job as it is.”
Nation-states are expected to use genAI because of their abundant resources. For other cybercriminals, Mellen cited three reasons why they are not using it for hacking and malware creation, contrary to what many have assumed:
- Being a cybercriminal is harder than it might seem, and genAI does not simplify this practical difficulty.
- From a technical standpoint, there are significant logistical hurdles to employing genAI for hacking.
- If it ain't broke, don't fix it. Cybercriminals already have effective methods at their disposal, as indicated by reports showing increases in both the number and cost of data breaches.
However, potential areas where cybercriminals might use genAI include phishing, malware code writing with malicious GPTs, deepfakes and intelligence gathering, Mellen noted.