- Analysts are not employed by SDxCentral.
- Views and opinions expressed in analyst content belong solely to the author and do not reflect the views of SDxCentral, LLC.
- SDxCentral does not fact check analyst content. If you believe there is a factual error in analyst content, please notify email@example.com. Should we find factual irregularities, that article will be unpublished from the SDxCentral website.
Effective April 18th, 2019, the SDxCentral analyst blog syndication program has been terminated.
SDxCentral Statement about AvidThink, LLC
- Roy Chua, the founder of AvidThink, was a co-founder of SDxCentral. As of September 30, 2018, Roy is no longer affiliated with SDxCentral.
- The views expressed by AvidThink and Roy Chua are independent of SDxCentral and do not represent the views or journalistic principles of SDxCentral.
- As of April 18th, 2019, SDxCentral is no longer publishing AvidThink analyst blogs on the SDxCentral website.
SHANGHAI — At the Huawei Connect 2018 event here last week, the theme was all about artificial intelligence (AI). With 25,000 of Huawei’s customers, prospects, and partners in Shanghai, Huawei drove an all-AI agenda to tout its R&D might. It was impossible to miss the theme of the conference — it was plastered across buildings and TV screens and was featured in collateral and on entry badges.
Taking on Google, Nvidia, IBM, and More
As part of the keynotes, Huawei unveiled its latest AI agenda, starting with a set of AI chips whose specifications best Google’s own powerful AI TPUs (Tensor Processing Units) and Nvidia’s AI chips. Huawei’s top-of-the-line Ascend 910 and low-power Ascend 310 are 7nm parts built on its homegrown Da Vinci AI architecture. The Ascend 910 delivers up to 256 TeraFLOPS for deep learning, compared to 125 TeraFLOPS for Nvidia’s most powerful GPU, the Tesla V100, and 180 TeraFLOPS for Google Cloud’s TPU. Huawei also announced a clustered system consisting of 1,024 Ascend 910 chips that achieves 256 PetaFLOPS (available Q2 2019).
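A quick back-of-the-envelope check suggests the cluster figure is simply the per-chip number multiplied out (an assumption on my part; Huawei did not break down the math):

```python
# Sanity-check Huawei's cluster claim: 1,024 Ascend 910 chips at
# 256 TeraFLOPS each. (Assumes linear scaling and vendor peak figures.)
chip_tflops = 256          # Ascend 910 peak, per Huawei
cluster_chips = 1024
cluster_pflops = chip_tflops * cluster_chips / 1000  # TeraFLOPS -> PetaFLOPS
print(cluster_pflops)  # 262.144 -- marketed as ~256 PetaFLOPS
```

In other words, the headline 256 PetaFLOPS appears to be a rounded peak aggregate rather than a measured benchmark.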
On the lower end, the Ascend 310 is designed to provide 8 TeraFLOPS while consuming only 8W of power, making it well suited for embedding in edge devices and other compute hardware.
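Those numbers imply an efficiency of roughly 1 TeraFLOPS per watt. As a rough, hedged comparison (using Nvidia’s published ~300W TDP for the SXM2 V100, and peak vendor figures rather than like-for-like benchmarks):

```python
# Rough TeraFLOPS-per-watt comparison implied by the figures above.
# Assumption: ~300 W TDP for the Tesla V100 SXM2 module; both values
# are vendor peak numbers, not measured workloads.
ascend_310_eff = 8 / 8        # 8 TFLOPS / 8 W  = 1.0 TFLOPS per watt
tesla_v100_eff = 125 / 300    # 125 TFLOPS / ~300 W ≈ 0.42 TFLOPS per watt
print(round(ascend_310_eff, 2), round(tesla_v100_eff, 2))
```

If those peak numbers hold up in practice, the 310’s power envelope is what makes it interesting for the edge, not raw throughput.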
To demonstrate its software might, Huawei walked us up the stack above those chips, unveiling its Compute Architecture for Neural Networks (CANN), an operator library that improves developer efficiency when working with its AI chipsets; MindSpore, its new machine learning (ML) framework; and ModelArts, its new ML Platform-as-a-Service (PaaS) offering.
Huawei is essentially taking on the AI giants of the world, from Google and IBM, to chipmakers like Nvidia. When one of the largest ICT (information and communications technologies) companies in the world demonstrates a chip that ostensibly outperforms Nvidia and Google’s current top chips, it’s hard not to pay attention.
However, there is a key difference between Huawei’s AI initiative and those of Nvidia, Google, and IBM. And that difference lies in the expansive exhibits on the two levels of the expo show floor at Connect. Huawei spared no expense in demonstrating the breadth and depth of its array of offerings and has also clearly formalized its services capabilities.
Some industry insiders observed cynically that in the past, Huawei didn’t always have products, but it had some foundational software libraries and specifications, which it rapidly customized to meet customer needs using its sizable development teams. Service providers would speak of entire buildings across the road from their offices that were staffed with Huawei engineers ready to customize a product in days or weeks where other vendors might take months or even years back at headquarters halfway across the world. It looks like these engineers are now available as one of the largest services teams in the world.
Huawei All the Way
Regardless of product or service, Huawei’s offerings on the show floor spanned from transportation and logistics to air travel and physical security, virtual security, edge computing, smart city sensors, and cloud data centers.
Here’s the crux: unlike other AI companies, Huawei doesn’t just make the AI chip and develop the software stack; it also owns the assets that can utilize the AI. It doesn’t need to convince developers to use its platform (though that won’t stop the company from trying) or convince a product manufacturer to include its AI chip. Just like the world turtle, it’s Huawei all the way down, and Huawei all the way up.
Huawei can, and will, embed its AI across all its products and services. For now, the company claims that its chips are only for use in its products. Regardless, as a fundamental technology that could benefit a large part of its solutions portfolio (though it will take time), AI can bring significant value to Huawei and its clients. Huawei has astutely recognized that investment in a fundamental technology like AI has benefits and ramifications across all aspects of its businesses.
One of the key elements in the rapid development of new technologies is the feedback loop. With its own products, resources, expertise, and services, feedback can come from within the company, reducing cycle time in its feedback loop and helping the company improve rapidly. And Huawei’s AI deployments across its various verticals will succeed more often than not by dint of sheer development might.
Now, Huawei still has to contend with mistrust from some Western governments about the company’s true intentions, and in this politically charged world we live in, navigating all that will take plenty of diplomacy and needle-threading. However, this vertically integrated company with extreme breadth is not only potentially the biggest manufacturer of AI chips and AI software, but it also stands to be the biggest user and consumer of the same. And that’s what makes this so significant.
AI Fabric: It’s AI But Not AI
To wrap up, I’ll share a quick aside: one item that caught my attention in the sea of AI-infused offerings was a slightly misnamed product, AI Fabric. It turned out that AI Fabric was a networking innovation for Huawei’s data center switches that didn’t exactly use AI. However, it enables AI jobs to run faster.
Huawei’s development team has been making tweaks to buffer management and watermark management and improving back pressure and congestion identification on its switches — not the sexiest technology when compared to glitzy AI. But by combining this with Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE), it provides a lossless fabric for AI jobs (and other high-performance compute workloads). It is a nuts-and-bolts type of improvement from good-ole networking R&D that yielded a 40 percent improvement in AI training and a 20 percent improvement in storage network IOPS, which the company demonstrated.
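To make the watermark idea concrete, here is an illustrative sketch — emphatically not Huawei’s implementation, and all thresholds are hypothetical — of the kind of watermark-based congestion signaling that underpins lossless RoCE fabrics: when a switch queue crosses a high watermark, packets get ECN-marked so senders back off before the buffer overflows, and marking stops once the queue drains below a low watermark.

```python
# Illustrative watermark-based ECN marking with hysteresis (a generic
# lossless-fabric mechanism; thresholds and API are hypothetical).
HIGH_WATERMARK = 80  # queue depth (packets) at which marking starts
LOW_WATERMARK = 40   # queue depth at which marking stops

def mark_ecn(queue_depth, marking):
    """Return (mark_this_packet, new_marking_state)."""
    if queue_depth >= HIGH_WATERMARK:
        marking = True            # buffer filling: signal congestion
    elif queue_depth <= LOW_WATERMARK:
        marking = False           # buffer drained: stop signaling
    return marking, marking       # between watermarks, state persists

state = False
for depth in (30, 85, 60, 35):
    mark, state = mark_ecn(depth, state)
    print(depth, mark)  # 30 False / 85 True / 60 True / 35 False
```

The hysteresis between the two watermarks is what keeps the fabric from oscillating: congestion signals persist until the queue has genuinely drained.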
Simple but real, concrete improvements are sometimes what I crave in the glitzy, AI-filled, software-defined world we live in today.
Image from: Prescott09/123RF