It seems the entire enterprise technology marketplace has been swept up in a frenzy for all things artificial intelligence (AI). Startups are popping up almost daily, each with a “unique” solution to one or more rather specific problems, such as a chatbot capable of understanding millennial slang. Hardware manufacturers are likewise scrambling to capitalize on the global hunger among hyperscale public cloud platform players for AI acceleration chipsets. And these same cloud providers have gone “all in” on AI, rolling out a constant stream of development frameworks and modeling tools capable of tackling a very wide range of use cases, both those currently known and those not yet imagined.
The first and most obvious challenge that accompanies this kind of wild abandon is an acute overabundance of choice. The single decision of purchasing a chatbot, for example, is accompanied by numerous, interdependent choices spanning hosting, database, language corpus, client integration, etc. The same holds true for the very building blocks of AI itself, such as deep learning (DL) frameworks, where buyers currently have an embarrassment of riches, with options ranging from Microsoft CNTK and Google TensorFlow to Caffe, Keras, and MXNet.
But these are fairly common, "best of breed" IT problems. And thanks to ideas like containerization, virtualization, and open source, even longstanding concerns over interoperability and vendor lock-in are no longer major limiting factors. No, the real challenge with AI starts long after the initial technology decision has been made. The truly difficult hill to climb for organizations seeking to digitally transform using AI begins the minute the first successful AI algorithm and data model finds its way into production.
Always Be Improving
Successful AI solutions are in fact living entities, constantly in flux, demanding ongoing management and revision of their constituent data sets, data models, and algorithms. Data models are trained and retrained. Algorithms are refined. And data sets are expanded continuously, forming a virtuous cycle of improvement over time.
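That cycle can be made concrete with a minimal sketch: train an initial model, fold in data collected while it runs in production, retrain, and only promote the new model if it scores better on a held-out validation set. This is an illustrative toy using scikit-learn on synthetic data; the function names, data splits, and promotion rule are assumptions for the sketch, not any vendor's actual pipeline.

```python
# Sketch of the "virtuous cycle" above: train, collect new data, retrain,
# and promote the retrained model only if it validates at least as well.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for production data arriving over time.
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_init, y_init = X[:200], y[:200]      # data available at first launch
X_new, y_new = X[200:400], y[200:400]  # data collected while in production
X_val, y_val = X[400:], y[400:]        # fixed held-out validation set

def train(X_train, y_train):
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

def promote_if_better(current, X_train, y_train, X_eval, y_eval):
    """Retrain on the expanded data set; keep whichever model validates best."""
    candidate = train(X_train, y_train)
    cur_acc = accuracy_score(y_eval, current.predict(X_eval))
    cand_acc = accuracy_score(y_eval, candidate.predict(X_eval))
    return (candidate, cand_acc) if cand_acc >= cur_acc else (current, cur_acc)

model_v1 = train(X_init, y_init)
model_v2, deployed_acc = promote_if_better(
    model_v1,
    np.vstack([X_init, X_new]), np.hstack([y_init, y_new]),
    X_val, y_val,
)
```

The point of the sketch is the loop itself: retraining is not a one-off event but a gated, repeatable step, which is exactly the lifecycle discipline the article argues AI currently lacks.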
Unfortunately, there's really no glue holding all of these elements together: the data, the models, the algorithms, even the underlying hardware. Worse, each of these elements depends on a very different development or engineering role, often operating in complete isolation. In this way, AI solutions are currently highly siloed endeavors, both while being built and while in production.
That's where traditional software development practices come into play. What AI needs is a touch of application lifecycle management (ALM) coupled with a unified architecture that allows developers to work with the entire hardware and software stack. This demands a solution that covers planning, design, development, testing, deployment, maintenance, and most importantly, ongoing revision. You see, software, whether it's a tool for booking a flight or one for predicting stock market "corrections," must be considered a living system.
If we incorporate tasks like data model training and refinement into the bigger picture of a complete, living solution, we can do far more than the above-mentioned silos of expertise can on their own. We can create a virtuous cycle of improvement for AI, not in isolated pockets, but across the entire solution and in real time.
Again, this is not an unheard-of concept within traditional software development circles. To traditional ALM practitioners, these notions are simply necessary steps along the way to the vaunted outcome known as "continuous delivery." You're not simply engineering a dinosaur and keeping it alive with daily food rations; you're continuously reinventing and improving that dinosaur, day in and day out. And that takes all elements of the AI solution working in concert.
Where To Start With AI
Back to more practical matters: for potential buyers looking to capitalize on AI right now, the lesson from all of this is quite simple. Don't get caught up in the individual components of AI itself. Rather, look for technology partners that understand software development practices and how those might apply to the realm of AI. Right now that's a fairly small field, with vendors such as SAP, IBM, and Microsoft leading the charge in handling most of these issues.
But in terms of sheer scope and audacity, there's one vendor truly looking at the AI opportunity as an ALM problem. Perhaps surprisingly, that vendor is the Chinese tech giant Huawei. Its recently introduced AI portfolio (touted as a full-stack, all-scenario endeavor) begins with the company's own Ascend AI chips (also recently introduced) and ends with a set of pre-integrated industry solutions. Here's a bird's-eye view of the entire portfolio.
- Infrastructure: Devices, edge equipment, servers, converged systems, and cloud services featuring various AI-specific chips such as the Huawei Ascend.
- Infrastructure Orchestration: Huawei’s Compute Architecture for Neural Networks (CANN) offering, which is where developers can assign specific tasks to specific chips (Ascend, GPU, CPU, etc.).
- Model Execution: A unified training and inference framework, branded as Huawei MindSpore, which supports the usual suspects (TensorFlow, PyTorch, PaddlePaddle, etc.).
- Model Development and Management: This is where the rubber meets the road for developers, with a set of open APIs, pre-integrated solutions, a marketplace for those solutions, and a one-stop model development, deployment, and revision workshop branded as Huawei ModelArts.
Will this all-inclusive, holistic approach dominate the market? Geographically, of course, Huawei still faces its familiar challenges in reaching the North American market. But in terms of providing customers with a solution-complete entry point to the AI opportunity, I think the vendor has made a strong statement with this release, one that underscores the truth that AI isn't just a collection of fancy machine learning libraries but instead a facet of a much larger, living system.
ANALYST COLUMN DISCLAIMER
Statements and opinions expressed in articles, reviews, and other materials herein are those of the authors, not the editors and publishers.
While every care has been taken in the selection of this information and reasonable attempts are made to present up-to-date and accurate information, SDxCentral, LLC cannot guarantee that inaccuracies will not occur. SDxCentral will not be held responsible for any claim, loss, damage or inconvenience caused as a result of any information within this site, or any information accessed through this site.
The content of any third party web site which you link to from the SDxCentral site is entirely out of the control of SDxCentral, and you proceed at your own risk. These links are provided purely for your convenience. They do not imply SDxCentral's endorsement or association. The copyright and any other intellectual property rights in any third party content belong to the author and/or other applicable third party.