You pretty much had to expect artificial intelligence (AI) to come up at this week’s Amazon re:Invent conference, considering how hot a topic AI has become.
In fact, Amazon Web Services (AWS) kicked off the discussion early. CTO Werner Vogels wrote on his own blog last week that the company has decided to make MXNet its “deep learning framework of choice.”
There’s no obvious indication that industry competition played a role in AWS’ decision. But two open-source deep learning frameworks originated from cloud rivals: TensorFlow at Google, and CNTK at Microsoft. (In fact, CNTK now goes by the name of Microsoft Cognitive Toolkit.)
And it’s a good bet that artificial intelligence, at some level, will be among the re:Invent announcements this week, since rivals have begun moving into that space. In September, Microsoft announced the formation of an AI research group, and Google Cloud launched a machine learning group earlier this month. IBM, of course, has Watson.
It’s worth noting that not all artificial intelligence efforts are equivalent. Deep learning, specifically, is a subset of machine learning that uses many-layered neural networks to tackle more complex tasks. But it appears safe to say AWS, Microsoft, Google, and IBM are all researching multiple levels of AI, including deep learning.
Vogels cites three factors that helped AWS settle on MXNet. First, it scales well: it can be spread across many GPUs to train, in effect, larger deep learning models. AWS tested this with Inception v3, an image analysis algorithm, and found that MXNet's throughput scaled almost linearly with the number of GPUs used. (Vogels cites a scaling efficiency of 85 percent.)
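The scaling-efficiency arithmetic is simple to illustrate. The 85 percent figure is Vogels'; the GPU count and per-epoch timings below are invented purely for this sketch, assuming efficiency is defined the usual way as speedup divided by GPU count:

```python
def scaling_efficiency(t_single_gpu, t_multi_gpu, n_gpus):
    """Speedup relative to one GPU, divided by the GPU count.

    A value of 1.0 means perfectly linear scaling; lower values
    reflect communication and synchronization overhead.
    """
    speedup = t_single_gpu / t_multi_gpu
    return speedup / n_gpus

# Hypothetical numbers: one GPU takes 128 s per training epoch,
# while 16 GPUs take 9.4 s for the same work.
eff = scaling_efficiency(128.0, 9.4, 16)
print(f"{eff:.0%}")  # about 85%, the efficiency Vogels cites
```

The point of near-linear scaling is that adding a 16th GPU still buys you most of a GPU's worth of extra throughput, rather than being eaten by coordination overhead.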
But MXNet is also suitable for remote locations that might have modest computing power available — a possible nod to the Internet of Things (IoT) and the trend of pushing IoT compute power out to devices. MXNet can run on as little as 4 GB of memory, he writes.
Finally, AWS was looking for development speed: the ability to write deep learning code quickly in familiar languages such as Python.