Hitachi Data Systems (HDS) has introduced an updated version of Hitachi Content Platform (HCP) and a new product called Hitachi Data Ingestor (HDI). Last week I was briefed on these two new offerings. They hit the wire today; here is HDS’s press release for the news.
Here are the key features of HCP 4 and HDI:
- Targeted at large enterprises for private cloud scenarios
- Targeted at cloud providers and co-location providers that don’t write their own APIs
- This integrates with existing HDS storage products, including the newly announced Virtual Storage Platform (VSP).
- Up to 40PB of usable capacity in a single physical cluster (VSP supports up to 255 PB, however)
- Technical features (copied from pre-release materials):
  - Intelligent objects
  - Compression and deduplication
  - Encryption of data at rest
  - Compliance and retention
  - Built-in protection, preservation and replication
  - High-availability architecture
  - Continuous data integrity checking
  - Advanced replication and disaster recovery
The figure below captures HCP well at a glance:
While HCP is just one technology in the ever-changing cloud landscape, what really caught my eye was the new HDI component. Basically, HDI is a cluster (initially, and somewhat oddly, of two Dell servers) that provides CIFS and NFS endpoints to an HCP engine. It is also fully multi-tenant aware. The figure below shows a few features and how HDI fits into the HCP environment:
The small cluster of general-purpose servers that makes up HDI also functions as a cache: 4 TB of disk space in the HDI component boosts performance on the front end of HCP. HDI includes other features as well, such as replicate-on-write and a network of stub pointers back to HCP.
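HDS hasn’t published the internals, but the stub-pointer idea can be sketched as a read-through cache: the gateway holds full copies of hot objects locally and only a stub (a pointer back to HCP) for everything else, fetching from the back end on a miss. The sketch below is purely illustrative; the class name, the in-memory `backend` dict standing in for HCP, and the naive eviction policy are all my assumptions, not HDI’s actual design.

```python
# Illustrative sketch of a stub-pointer gateway cache, NOT HDI's actual
# implementation. A dict stands in for the HCP object store.

class StubCache:
    def __init__(self, backend, capacity=2):
        self.backend = backend      # stands in for the HCP back end
        self.cache = {}             # objects held in full on the gateway
        self.capacity = capacity    # cap on locally cached objects

    def read(self, name):
        if name in self.cache:              # cache hit: serve locally
            return self.cache[name]
        data = self.backend[name]           # stub: fetch from the back end
        if len(self.cache) >= self.capacity:
            # naive eviction; the evicted object reverts to a stub
            self.cache.pop(next(iter(self.cache)))
        self.cache[name] = data
        return data

    def write(self, name, data):
        # "replicate on write": persist to the back end immediately,
        # and keep a local copy for subsequent reads
        self.backend[name] = data
        self.cache[name] = data


backend = {"a.txt": b"alpha", "b.txt": b"beta"}
gw = StubCache(backend)
gw.read("a.txt")            # fetched from the back end, now cached
gw.write("c.txt", b"gamma") # lands in both the back end and the cache
```

The key property is that every write is durable on the back end before the gateway acknowledges it, so losing the 4 TB cache costs performance, not data.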
I’m working to get a screenshot of the CIFS/NFS engines for HDI – if that interface turns out to be intuitive and seamless, the potential could be huge. More info is at the HCP homepage.