StorPool is next-generation data storage software. It pools the local storage (hard disks or SSDs) attached to standard servers to create a single pool of shared block storage. StorPool runs on a cluster of servers in a fully distributed, shared-nothing architecture: all functions are performed by all of the servers on an equal peer basis.
The software consists of two parts – a storage server (target) and a storage client (driver, initiator) – that are installed on each physical server (host, node). Each host can be a storage server, a storage client, or both (i.e. a converged setup, converged infrastructure). To storage clients, StorPool volumes appear as local block devices under /dev/storpool/*. Data on volumes can be read and written by all clients simultaneously, and consistency is guaranteed through a synchronous replication protocol. The StorPool client communicates in parallel with all of the StorPool servers. Since our latest release we also support other operating systems/hypervisors as clients of the StorPool storage system, through iSCSI. For this purpose we have developed a scale-out, highly available iSCSI target.
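Because a volume appears to the client as an ordinary block device, applications can use it with normal file I/O. A minimal sketch follows; the volume name is hypothetical, and a temporary file stands in for the real /dev/storpool/* device so the snippet is self-contained:

```python
import os
import tempfile

# On a real StorPool client, a volume named "vol1" would appear as the
# block device /dev/storpool/vol1. For illustration, a temporary file
# stands in for the device here.
device_path = os.path.join(tempfile.mkdtemp(), "vol1")
with open(device_path, "wb") as f:
    f.write(b"\x00" * 4096)  # initialize one 4 KiB "block"

# Reads and writes use the standard block-device interface:
fd = os.open(device_path, os.O_RDWR)
os.pwrite(fd, b"hello", 0)   # write 5 bytes at offset 0
data = os.pread(fd, 5, 0)    # read them back from offset 0
os.close(fd)
print(data)  # b'hello'
```

The same open/pread/pwrite pattern is what a hypervisor or database would effectively perform against the real device node.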
StorPool provides standard block devices. One or more volumes can be created through the StorPool JSON API or CLI volume manager. Redundancy is provided by multiple copies (replicas) of the data, written synchronously across the cluster. Users can set the desired number of replication copies.
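A volume-create request through the JSON API might be shaped like the following sketch. The field names (`volume`, `size`, `replication`) are illustrative assumptions, not the documented StorPool API schema:

```python
import json

# Hypothetical volume-create request body; field names are illustrative
# and do not reflect the exact StorPool JSON API schema.
request = {
    "volume": "vol1",
    "size": 100 * 2**30,   # 100 GiB, expressed in bytes
    "replication": 3,      # desired number of synchronous copies
}
payload = json.dumps(request)

# In a real deployment this payload would be POSTed to the cluster's
# JSON API endpoint (e.g. via http.client or an HTTP library).
decoded = json.loads(payload)
print(decoded["volume"], decoded["replication"])
```

The same operation is also available through the CLI volume manager; the API form is shown here only because its request/response shape is easy to sketch.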
StorPool provides a very high degree of flexibility in volume management. Every disk added to a StorPool cluster adds capacity to the cluster, not just for new data but also for existing data. StorPool does not impose any strict hierarchical storage structure tied to the underlying disks. It simply creates a single pool of data storage (a global namespace) that utilises the full capacity and performance of a set of commodity drives.
In StorPool, redundancy is guaranteed through a synchronous replication algorithm. This can be thought of as a very advanced software RAID between servers and racks. Consistency is guaranteed by end-to-end data integrity checking. Typically, the data needed by one StorPool client is spread across drives in all of the servers in the cluster. This layout provides high performance and real-time load balancing. Data placement and replication are independently selectable for each volume.
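Conceptually, a synchronous replication write is acknowledged only after every replica holds the data, so any replica can then serve reads consistently. The following toy in-memory model illustrates that idea only; it is not StorPool's actual replication algorithm or on-disk layout:

```python
# Toy model of synchronous replication: a write is acknowledged only
# after it has been applied to every replica "drive".
REPLICATION = 3
drives = [dict() for _ in range(REPLICATION)]  # one dict per replica drive

def write_block(offset, data):
    """Apply the write to all replicas before acknowledging it."""
    for drive in drives:
        drive[offset] = data   # synchronous copy to each replica
    return True                # ack only once every copy has landed

def read_block(offset):
    """Any replica can serve the read; all hold identical data."""
    return drives[0][offset]

write_block(0, b"block-0")
assert all(d[0] == b"block-0" for d in drives)  # every replica has the data
result = read_block(0)
print(result)  # b'block-0'
```

The key property the model shows is that after a successful write, no replica can return stale data, which is what lets all clients read and write the same volume simultaneously.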
As mentioned above, in our latest release we added support for various other hypervisors, including VMware vSphere/ESX/ESXi, Windows Server & Hyper-V, and others. These hypervisors access StorPool's shared storage service through a highly available, scale-out iSCSI connection. This approach has the advantage of a high level of compatibility with any system that supports iSCSI. A limitation of the first version of this implementation is that StorPool cannot run hyper-converged with these operating systems/hypervisors.