Red Hat went heavy on integrating artificial intelligence (AI) across its portfolio at its recent Summit 2024 event, moves that are hard to quarrel with seeing as everyone else is doing the same.

However, comments from Red Hat executives during the event’s question-and-answer sessions left me with a few more of the former than the latter on whether AI is ready to steer mankind’s future.

Taking a step back, Red Hat’s AI moves centered on its Lightspeed, OpenShift AI and Podman AI Lab products, which company executives said are core to the vendor gaining a foothold in the rapidly expanding AI universe.

“The momentum around AI is not slowing down at this point,” Steven Huels, VP and GM of Red Hat’s AI business unit, said during a press briefing ahead of this week’s Red Hat Summit 2024 event. “The pace of innovation is probably only increasing.”

Huels’ “probably” qualification seemed quaint, as analyst firms expect AI innovation and adoption to explode over the next several years.

Just to pull one random number, IDC late last year predicted the worldwide AI software market will grow at a 31.4% compound annual growth rate over the next several years, surging from $64 billion in revenue in 2022 to nearly $251 billion in revenue in 2027.

My eyes typically glaze over whenever I see numbers followed by “billion,” so I will not quibble with just how big this market will be. However, those numbers would seem to indicate that adoption is happening as we speak, and answers from Red Hat executives left me feeling that maybe we are putting the AI cart before the analog horse.

AI questions, answers and more questions

Ashesh Badani, SVP and chief product officer at Red Hat, and Chris Wright, CTO and SVP of global engineering at the firm, held a Q&A session with analysts and media during the event, where they were peppered with more than a few questions around the security and sanctity of the data being used to train AI models, or as it was described: “structured tuning.”

Both executives gave high-level answers as to how AI models in general, and Red Hat’s specifically, are fed the data needed to train them. These were mostly boilerplate responses around ingesting huge amounts of data from the internet and then setting model parameters to tease out data specific to a particular use case or need.

Most of these early use cases have been around time-intensive, though relatively low-impact, services like customer support trees and disturbing artistic creations. But the amount of money enterprises are investing, and will invest, in AI will require a greater return than hopefully reducing my hatred for interacting with customer service systems and my further confusion over what is art.

Many have noted that more oversight is needed to support the higher-quality output these AI models will be expected to deliver, and that’s something Red Hat is working through with its recently launched InstructLab open source project.

“The way we use InstructLab has a human-in-the-loop process as well as an expectation from a community,” Wright explained. “So, when you build technology in a community, you kind of create community expectations around how are you going to build and deliver that technology at a community level. That same expectation translates very well into the development or addition of skills and knowledge into a model.”

Wright likened this process to a pull request from traditional open-source repositories, and then attempted to simplify the struggle of advancing these use cases with what he even said was “maybe not the best analogy.”

“You could kind of draw an analogy to what happens in a place like Wikipedia, where there's a review process of content that goes into this content repository or this collection of knowledge and information,” Wright said.

However, when pressed on the initial question of how these data repositories can be managed in a way that roots out malicious synthetic data or data poisoning – terms Red Hat CEO Matt Hicks threw around in a later solo Q&A session that sound fun in a theoretical sense, but concerning when applied to the real world – Wright’s answer left me with a vague feeling.

“In any context, a tool can be used in a variety of ways, some very positive, some very negative,” Wright said. “We're building tools here and the community process is a part of that, the broader guardrails are another part of that, and I don't think we're entering into a realm of total chaos and anarchy. I think we're just introducing a broader set of people the ability to contribute their skills and knowledge to a shared knowledge base.”

I don’t think Wright meant that answer to be vague, but Badani undercut whatever hope it offered by noting of the human-in-the-loop model: “that’s not foolproof though.”

Is AI in charge?

Don’t get me wrong, I am all for technological advances making life easier. I am also constantly enamored with folks like Wright and his colleagues across the ecosystem who are driving this path of innovation. I probably also have an irrational soft spot for Red Hat and its executives, so whenever I hear people like Wright talk about the future, I typically refrain from my usual mid-session nap.

However, Wright did make one statement toward the end of the session that left me thinking maybe we are starting to veer toward this altruistic future a bit too quickly.

“This is a big and complicated space,” Wright said of the AI ecosystem. “This time next year we'll be having a similar conversation, but we will have sorted out one half of the security concerns and be moving on to the next ones, so it's a very fast-moving space.”

I am not questioning people who are obviously much smarter than me as to just how safe, secure and ready-to-trot AI is, because that is exactly what I am doing. But if AI is indeed being integrated into everything as quickly as analyst forecasts and the mound of press releases we receive would indicate, wouldn’t it make more sense to work through more of those “security concerns” today instead of waiting 12 months?

I know it’s too late to slow down this rapidly accelerating AI cart, so I guess I can only hope Wright’s comment that “we will have sorted out one half of the security concerns and be moving on to the next ones” at this point next year is at least half right. Or at least hope there is enough patience in the market to make sure AI is safe and secure enough to lead.