TidalScale raised $24 million in a Series B funding round for its Software-Defined Servers product — commodity hardware running the company’s software that pools the servers’ resources into a single virtual system that can be scaled as needed.
Using software alone, the technology can create a single system with up to 64 terabytes of in-memory capacity. This on-the-fly sizing means companies can right-size their servers in on-premises data centers and in the cloud to fit huge workloads and compute-intensive data sets.
It’s a type of composable infrastructure that focuses specifically on computing resources, said Chuck Piercey, co-founder and VP of marketing at TidalScale.
“What TidalScale brings is a composability to the compute element specifically,” Piercey said.
Composable infrastructure treats networking, storage, and compute as fluid resource pools that can be composed and recomposed as needed. But Piercey said, “If one zooms in on the compute piece of that, it is a single compute server, which ultimately becomes limiting. TidalScale can compose what appears to be a single compute server by combining multiple discrete servers from a server-node buffet. TidalScale can exist as the compute element in a broader composable framework with storage and networking, making the overall solution much more scalable and flexible.”
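TidalScale has not published its internal interfaces, but the composability idea Piercey describes is easy to sketch. The toy Python below (all names are hypothetical and illustrative, not TidalScale's API) shows a "server-node buffet" from which a single logical compute server is composed, then right-sized by adding another node:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """One commodity server in the buffet."""
    name: str
    cores: int
    memory_gb: int

class ComposedServer:
    """A logical server assembled from discrete physical nodes."""

    def __init__(self) -> None:
        self.nodes: list[Node] = []

    def add(self, node: Node) -> None:
        self.nodes.append(node)

    def remove(self, name: str) -> None:
        self.nodes = [n for n in self.nodes if n.name != name]

    @property
    def cores(self) -> int:
        return sum(n.cores for n in self.nodes)

    @property
    def memory_gb(self) -> int:
        return sum(n.memory_gb for n in self.nodes)

# Compose a "single" server from three commodity boxes,
# then right-size it on the fly by pulling a fourth from the buffet.
buffet = [Node(f"node{i}", cores=32, memory_gb=512) for i in range(3)]
server = ComposedServer()
for n in buffet:
    server.add(n)
server.add(Node("node3", cores=32, memory_gb=512))
print(server.cores, "cores,", server.memory_gb, "GB")  # 128 cores, 2048 GB
```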
The company’s Series B funding brings its total to $38.8 million. Bain Capital Ventures, Hummer Winblad, Sapphire Ventures, Infosys, SK Hynix, and an unnamed “leading server OEM” participated in the new round.
The company plans to use the funding to increase Software-Defined Server adoption and further invest in its technology. “The secret sauce is this machine-learning layer that is auto-tuning that workload, and there’s a world of investment to do there because you’re teaching the machine to tune itself,” Piercey said.
Big Memory on Cheap Servers
The startup, headquartered in Campbell, California, and founded Jan. 1, 2013, got its start as a “technical daydream,” Piercey said. TidalScale co-founder and CTO Ike Nassi was the EVP and chief scientist at SAP when the category of in-memory databases was established. Nassi wanted to develop a system that could automatically split big-data workloads across machines. “He said you can do this under the covers and let the machine fine-tune itself,” Piercey said. Nassi retired from SAP, but “he still had the itch.”
TidalScale grew out of that daydream. Its technology means companies don’t have to buy huge, expensive servers to run in-memory databases like Oracle Database and SAP HANA, as well as simulations and analytics workloads, Piercey said. Oracle and SAP are also technical partners and a major part of the company’s go-to-market strategy.
“What TidalScale does is give you a way to do big memory on cheap servers, and you don’t have to rewrite the application,” Piercey said. “It’s worth having in your tool kit as a cheap way to do scale up.”
The key to this is the company’s HyperKernel software. The software binds multiple physical servers into a single virtual system and enables right-sizing servers on the fly. It allows customers to run an unmodified guest operating system and make no changes to existing applications. And once the system is up and running, the machine-learning layer continuously optimizes performance.
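TidalScale has not detailed how that self-tuning works, but the general idea of locality-driven optimization can be sketched. The toy Python below (hypothetical, not HyperKernel's actual algorithm) tracks which node touches a guest memory page most often and rehomes the page once a clear pattern emerges:

```python
from collections import Counter, defaultdict

class PagePlacer:
    """Toy locality optimizer: move a guest memory page to whichever
    node touches it most often (illustrative only)."""

    def __init__(self, threshold: int = 3) -> None:
        self.home: dict[int, str] = {}                  # page -> owning node
        self.touches: dict[int, Counter] = defaultdict(Counter)
        self.threshold = threshold                      # samples needed before moving

    def record_access(self, page: int, node: str) -> None:
        self.touches[page][node] += 1
        hottest, count = self.touches[page].most_common(1)[0]
        # Migrate the page once a node shows a clear access pattern.
        if count >= self.threshold and self.home.get(page) != hottest:
            self.home[page] = hottest

placer = PagePlacer()
for node in ["node0", "node1", "node1", "node1"]:
    placer.record_access(page=0x42, node=node)
print(placer.home[0x42])  # node1
```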
The company’s technology targets organizations needing huge amounts of memory for workloads including research analytics, simulations, and genomics. Customers include the University of Texas at San Antonio, IoT platform provider Sirqul, and NCS, which uses a predictive analytics engine to provide banks and regulatory agencies with real-time monitoring and alerts for cash-intensive businesses.