Hitachi Data Systems updated its object storage technology, adding more storage capacity and improving file sharing and data management.
The Hitachi Content Platform (HCP) portfolio is the company’s object storage play. It allows enterprises to securely store, share, and manage content. Customers can then access this content from anywhere, on any device.
This technology manages data as objects, as opposed to file system storage like network-attached storage (NAS) or block-level data storage like storage area networks (SAN). Amazon Simple Storage Service (Amazon S3) is probably the best-known example that lives in a public cloud; Microsoft Azure and Google offer their own public cloud object storage as well.
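The distinction can be sketched in a few lines: in the object model there is no directory hierarchy, just flat keys, with user-defined metadata traveling alongside each object. The class and method names below are illustrative only, not HCP's or S3's actual interface.

```python
# Minimal illustrative sketch of the object storage model (hypothetical
# API, not HCP's or Amazon S3's real interface): each object lives under
# a flat key and carries its own metadata, unlike a file system tree.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (data, metadata)

    def put(self, key, data, metadata=None):
        """Store bytes under a flat key with arbitrary metadata."""
        self._objects[key] = (data, dict(metadata or {}))

    def get(self, key):
        """Retrieve the object's data and its metadata together."""
        return self._objects[key]

store = ObjectStore()
store.put("reports/q3.pdf", b"...", {"owner": "finance", "retention": "7y"})
data, meta = store.get("reports/q3.pdf")
print(meta["retention"])  # metadata travels with the object
```

Because metadata is stored with the object rather than in a separate database, it can drive the kind of policy automation and analytics the HCP products describe.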
In addition to software-defined object storage, Hitachi Data Systems’ portfolio also provides file synchronization and sharing capabilities (think: a secure, private Dropbox) and data analytics.
“HCP is our object store that has this rich metadata but also a lot of intelligence and automation built in,” said Tanya Loughlin, director of content cloud and mobility product marketing.
It also integrates with public clouds, said Tim Desai, Hitachi Data Systems senior product marketing manager. “We have the broadest support for public cloud integration of any object storage solution,” he added. “Most only connect to Amazon and now Azure. From the get-go we offered connection to Amazon, Google, Azure, whatever.”
Hitachi Content Platform
The portfolio includes four products: the Hitachi Content Platform for software-defined object storage; HCP Anywhere for file synchronization and sharing; Hitachi Data Ingestor, an elastic-scale cloud storage gateway; and Hitachi Content Intelligence, which provides data analytics.
With this new release, the company says HCP gains a 400 percent increase in usable storage per cluster, 67 percent more storage node capacity via 10TB drives, a 55 percent increase in objects per node, and simplified software licensing.
Other updates to the software-defined object storage include:
- Improved multipart file transfers to speed large file uploads, downloads and range reads for Amazon cloud storage applications.
- Improved APIs that give organizations greater visibility into infrastructure performance.
- Added geo-distributed erasure coding for multisite deployments, which the company says lowers capacity overhead, reduces costs, and speeds rebuilds. Corrupted files are restored from fragments stored across geographies rather than from full redundant sites, maintaining availability during adverse events such as a large-scale outage.
The company also announced updates to its file sync and sharing product. These new features include multipart file transfers to HCP for faster uploads of large files such as videos.
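The gist of multipart transfer is that a large file is cut into fixed-size parts that can be uploaded in parallel and retried independently, so one dropped connection does not restart the whole video. The splitting step is sketched below with hypothetical names; this is not HCP Anywhere's actual API.

```python
# Hedged sketch of the chunking step behind multipart transfer
# (illustrative only, not HCP's real interface): split a stream into
# fixed-size parts that could be sent in parallel and retried one by one.
import io

PART_SIZE = 5 * 1024 * 1024  # 5 MB parts, a common multipart minimum

def split_into_parts(stream, part_size=PART_SIZE):
    """Yield (part_number, bytes) chunks from a file-like object."""
    number = 1
    while True:
        chunk = stream.read(part_size)
        if not chunk:
            break
        yield number, chunk
        number += 1

parts = list(split_into_parts(io.BytesIO(b"x" * (2 * PART_SIZE + 1))))
print(len(parts))  # 3 parts: two full parts plus a 1-byte remainder
```

Each numbered part can then be transferred concurrently and reassembled in order on the server side, which is where the speedup for large files comes from.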
In a blog post, Hubert Yoshida, CTO of Hitachi Data Systems, said the product updates can help enterprises advance their digital transformation. “The key to success will be the ability to free the data from their legacy silos, and use them to augment new sources of data that can be analyzed to create new business opportunities,” Yoshida wrote. “Object storage with its rich metadata capabilities, open interfaces, and scalability can eliminate these silos.”