Hewlett Packard Enterprise’s (HPE’s) latest Superdome server is taking on real-time data analytics with in-memory computing.
“We’ve had this vision of in-memory compute — we call it memory-driven architecture,” said Jeff Kyle, VP of mission critical solutions at HPE. “It all started with the Superdome X system, working with SAP. Then we bought SGI. And now in 2017 we’re delivering Superdome Flex. You can turn data into actionable insights, at any scale, using industry standard hardware.”
In-memory computing gives all the processors in a system access to a pool of shared memory.
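The shared-pool idea can be illustrated with a toy sketch. This is not HPE's fabric or anything Superdome-specific — just Python's standard `multiprocessing.shared_memory` module (3.8+) used as an analogy: two processes operate on the same block of RAM rather than copying data between them.

```python
# Toy analogy for in-memory computing: processes share one pool of RAM
# instead of exchanging copies of the data. Standard library only.
from multiprocessing import Process, shared_memory

def double_in_place(name: str, length: int) -> None:
    # Attach to the existing shared block by name and modify it directly.
    shm = shared_memory.SharedMemory(name=name)
    for i in range(length):
        shm.buf[i] = shm.buf[i] * 2
    shm.close()

if __name__ == "__main__":
    data = bytes([1, 2, 3, 4])
    shm = shared_memory.SharedMemory(create=True, size=len(data))
    shm.buf[:len(data)] = data

    # A second process sees and updates the same memory; no copy is made.
    p = Process(target=double_in_place, args=(shm.name, len(data)))
    p.start()
    p.join()

    print(list(shm.buf[:len(data)]))  # [2, 4, 6, 8]
    shm.close()
    shm.unlink()
```

In a system like the one described here, that pool spans many sockets over a hardware fabric rather than one machine's OS-managed shared segment, but the programming benefit is the same: every processor reads and writes the data where it lives.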
HPE began working with SAP on an in-memory compute platform designed to run SAP's in-memory HANA database back in 2014, in a project codenamed Kraken.
HPE last year bought big data analytics company SGI for $275 million in a move to strengthen its position in high-performance computing and data analytics.
The result of those two moves is Superdome Flex. It runs on Intel Xeon processors and has a modular design that scales from four to 32 sockets in four-socket increments. Over its fabric it provides a shared memory pool that scales from 768 gigabytes to 48 terabytes in a single system. This gives companies the compute power needed to handle large data sets and run analytics without slowing transactions, Kyle said.
“You look at the amount of data customers are accumulating in their environments — they want to act on this data and act on it fast,” he said. One way companies do this is by using in-memory databases, such as SAP HANA, Oracle Database In-Memory, and Microsoft SQL Server.
Superdome Flex gives companies this computing power, but in-memory databases and analytics are “just the beginning of this platform for in-memory compute,” Kyle said. “Think: in-memory compute for AI and analytics at the same time. Superdome [will be] ready for that, probably by the end of next year.”