Artificial intelligence (AI) is an increasingly compute-intensive process. Depending on the model, training can tie up a cluster of thousands of GPUs for days, weeks, or even longer. But what if it didn’t have to? What if we built computers that worked more like our brains?
That’s the promise of an emerging technology called neuromorphic computing. And while it might sound like the kind of technobabble you’d hear in an AI-themed sci-fi thriller, it’s not as far-fetched as you might think.
Several large chipmakers, including IBM and Intel, believe the technology could enable real-time training on sparse datasets all while consuming a fraction of the power of traditional compute architectures.
Silicon Synapses

“The brain is just an incredibly, incredibly efficient signal processor,” Intel Labs’ Garrick Orchard told SDxCentral. “The human brain is estimated to consume on the order of 20 watts.”
For those 20 watts, the brain is able to seamlessly interpret all of the stimuli streaming in from your eyes, ears, nose, tongue, and skin, while simultaneously managing your fine motor control and the various other processes that keep your body working as intended, he explained. “There is so much going on within a 20-watt power pack.”
Neuromorphic computing takes inspiration from the brain’s vast network of interconnected neurons and synapses, and attempts to replicate it in silicon. But while neuromorphic computers are inspired by the brain, that doesn’t necessarily mean recreating a brain in silicon.
“We need to be cognizant of the fact that silicon is just different from biological wetware,” Orchard said. “We’re trying to find the principles that we think are fundamental to the efficiency of the brain, and where it makes sense, apply those to our artificial process in silicon.”
Likewise, neuromorphic computers are architecturally quite different from the computer you’re reading this on.
At the most basic level, computers today process information using nanoscale logic gates called transistors, which control the flow of electrons through a circuit. Cram a few hundred of them into a box and sync them up to a clock, and you’ve got a rudimentary computer capable of basic math.
Today’s processors contain billions of transistors. The A15 processor powering Apple’s iPhone 13 packs about 15 billion transistors in the palm of your hand.
Neuromorphic computing is quite different, explained Karl Freund, founder and principal analyst at Cambrian AI Research, in a recent article contributed to Forbes.
“Artificial intelligence platforms today use digital representations and math to create a rudimentary simulation of how the brain works,” he wrote. “The real world is not digital, and our brains certainly do not perform matrix multiplications to process input and output.”
Instead, the neurons in the brain communicate by sending spiked signals across synapses to each other, he explained. “The six senses create spiked input, which the brain then translates into spiking output to the muscles.”
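To make that contrast concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of spiking unit neuromorphic hardware typically emulates. The NumPy implementation, parameter values, and spike train below are illustrative assumptions, not any vendor’s actual neuron model.

```python
import numpy as np

def lif_neuron(input_spikes, weights, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: integrates weighted input spikes
    into a membrane potential, leaks a little each timestep, and emits a
    spike (1) whenever the potential crosses the threshold."""
    potential = 0.0
    output_spikes = []
    for spikes_t in input_spikes:           # one timestep of 0/1 spikes per input channel
        potential = leak * potential + np.dot(weights, spikes_t)
        if potential >= threshold:
            output_spikes.append(1)         # fire, then reset
            potential = 0.0
        else:
            output_spikes.append(0)
    return output_spikes

# Three input channels spiking sparsely over five timesteps (illustrative values).
spike_train = np.array([[1, 0, 0],
                        [0, 0, 1],
                        [0, 0, 0],
                        [1, 1, 0],
                        [0, 0, 0]])
print(lif_neuron(spike_train, weights=np.array([0.6, 0.4, 0.3])))
```

The point of the sketch is the communication style: nothing happens unless a spike arrives, and the only output is another spike, rather than a dense matrix of activations.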
Real World Benefits

While neuromorphic computing remains a nascent technology, early proofs of concept from Intel and others have shown promise. Earlier this year, the chipmaker’s Loihi 2 neuromorphic research processor made waves as the first chip to use Intel’s long-delayed 7-nanometer process node, which has since been rebranded Intel 4.
Aside from showcasing Intel's new manufacturing process, the chip is particularly effective at processing small-batch, low-latency AI models, especially where power is limited. One example is in autonomous vehicles, like drones or automobiles.
A drone controller may require several watts of power to operate and is relatively limited in the kind of real-time processing it can do, Orchard explained.
By comparison, a cockatiel’s brain consumes approximately 50 milliwatts of power and is far more capable. “It has a tiny little brain, but it can really navigate around these unknown environments at pretty high speed,” he added.
More importantly, the parrot can learn on the fly — pun intended — while the drone is constrained to a limited set of pre-trained scenarios, which may or may not be relevant to that environment.
Another use case is chemical sensing in industrial environments. Rather than having to collect a critical mass of data before running an AI model, neuromorphic processors can apply models to much smaller batch sizes. That includes batches of just one sample, which Orchard claimed can dramatically reduce the time required to determine whether a chemical is present.
“This is really the inspiration for this kind of work,” he said.
The Batch Problem

Neuromorphic computing isn’t going to replace CPUs or GPUs for every workload, Orchard said, adding that people often make the mistake of thinking that just because it’s based on biology it must be better, and that’s just not the case.
“GPUs can actually do very well at processing a lot of data, but they do so in this batch paradigm,” he explained. “They batch up tens, or hundreds, or some large number of samples, and then they process them all at once so they can get high throughput and high-energy efficiency.”
The problem, he explained, is that for small batch sizes, or batches of one, GPUs aren’t very efficient.
“You’re navigating around the world, something happens, you want to process that data sample as quickly as possible and respond to it,” Orchard said. “You don’t want to batch up 100 data samples and then start processing. … This is one of the areas where we see a lot of promise for neuromorphic computers.”
In fact, because neuromorphic computers are asynchronous, they can process incoming data immediately and then move on to the next sample as soon as it arrives, rather than being constrained by a clock. Another advantage, at least for Intel’s Loihi, is that the neurons are stateful, Orchard said. In other words, each has its own memory. “This is different from what you see in deep learning,” he added.
With traditional deep-learning models, each input is processed as it is in that moment. Recurrent models can take historical data into account, but that’s an extra step.
“But with Loihi, if you’re getting very sparse input data arriving, they can be updating the network’s internal representation of the world around it,” he said. Because of this, we see neuromorphic computing “as being well suited to certain types of tasks and things along the lines of sensory processing.”
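To illustrate that event-driven, stateful pattern, the toy sketch below keeps a small per-unit memory that decays between events and is updated the moment a sparse sample arrives, with no batching step. The class, decay rate, and event stream are made up for illustration; this is not Loihi’s actual programming model.

```python
import numpy as np

class StatefulUnit:
    """Toy event-driven unit: keeps its own memory (state) between samples,
    decaying it over idle timesteps and folding in each new sample as it
    arrives, instead of waiting to accumulate a batch."""
    def __init__(self, size, decay=0.8):
        self.state = np.zeros(size)   # each unit carries its own memory
        self.decay = decay

    def on_sample(self, sample, gap):
        # Let the state decay for the `gap` timesteps with no input,
        # then immediately incorporate the new sample.
        self.state *= self.decay ** gap
        self.state += sample
        return self.state.copy()

unit = StatefulUnit(size=2)
# Sparse events: (timesteps since last event, observation), illustrative values.
events = [(1, np.array([0.5, 0.0])),
          (4, np.array([0.0, 1.0])),
          (2, np.array([0.3, 0.3]))]
for gap, sample in events:
    print(unit.on_sample(sample, gap))
```

Each event is handled on arrival and the unit’s state already reflects everything seen so far, which is the behavior the batch-of-one argument above is getting at.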
Where’s My Neuromorphic Car?

For now, neuromorphic computing is in its infancy. Intel’s Loihi 2 processors, launched earlier this year, are the chipmaker’s second crack at developing a hardware architecture around the technology. In the coming years, Intel hopes to extend dedicated neuromorphic processors to a variety of environments, as well as to bundle them as co-processors alongside traditional CPU and GPU resources.
However, bringing neuromorphic compute to the mainstream is a multifaceted problem, Orchard said. “One of the main things we’ve recognized as being a problem holding back the field is just the lack of convergence on a single software framework.”
“There are a few promising software frameworks out there; for example, our previous one is owned by Intel and only available under an NDA,” he said. “Lava is open source and available for anyone to use and contribute back to. We’re specifically targeting our hardware with it, but other users can go out and target their own hardware with Lava as well.”
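For a rough feel of what building with Lava looks like, the snippet below follows the pattern of Lava’s early open-source examples: two populations of leaky integrate-and-fire neurons joined by a dense weight matrix and run on the CPU simulator. The module paths, class names, and parameters reflect those early releases and should be treated as assumptions; the current Lava documentation is the authority on the exact API.

```python
# Sketch modeled on Lava's early open-source examples (lava-nc). Module paths
# and parameter names may differ in current releases; treat them as assumptions.
import numpy as np

from lava.proc.lif.process import LIF          # leaky integrate-and-fire neurons
from lava.proc.dense.process import Dense      # dense synaptic connections
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

lif1 = LIF(shape=(3,))                         # input population of 3 neurons
dense = Dense(weights=np.random.rand(2, 3))    # 3-to-2 synaptic weight matrix
lif2 = LIF(shape=(2,))                         # output population of 2 neurons

lif1.s_out.connect(dense.s_in)                 # spikes out of lif1 feed the synapses
dense.a_out.connect(lif2.a_in)                 # synaptic currents feed lif2

# Run 10 timesteps on the CPU simulator rather than Loihi hardware.
lif2.run(condition=RunSteps(num_steps=10), run_cfg=Loihi1SimCfg())
lif2.stop()
```

The same process graph is meant to be retargetable, so code written against the simulator can, in principle, later run on neuromorphic silicon.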
Freund expects Intel’s Lava software development kit and new Loihi processors to attract more developers and create new opportunities for advancement in the fledgling field.
Intel already counts roughly 150 members in its Neuromorphic Research Community, while IBM has seen similar success driving development with its TrueNorth processor.
“There remains a lot of work in software and future chips with more capacity for larger models, but I’m convinced that analog and neuromorphic computing will both have a significant impact on capability and, more importantly, affordability of large scale AI,” he said.