“We are religious against the HDD. We are an extinction-level event for the hard drive.” That’s the claim of Vast Data, a US-based flash storage startup that promises “exabyte-scale flash at the price of HDD”, according to Jeff Denworth, product VP at the company, who spoke in an online presentation for the IT Press Tour last week.

Vast’s idea is that all customer data in its products resides on bulk QLC flash, with some 3D Xpoint as a buffer. That way, customers not only benefit from high performance – which is not necessarily the main aim – but their data is also highly available.

Why is that important? What Vast has in mind is that, increasingly, customers want access to all their data to run AI/machine learning operations – and that means access to everything on the same type of media.

The spin-off benefit is that customers don’t need to run numerous storage systems and suffer the inefficiencies that come with that scenario.

According to Denworth, the Gartner-inspired pyramid, with capacity storage at the bottom and the most performant at its apex, is a compromise that creates “bad behaviour” in the datacentre, with the need to use different storage systems and to migrate data between those tiers.

What Vast is doing, said Denworth, is to “turn the traditional storage pyramid” on its head. The aim is to provide on-premises storage that is as simple to consume as cloud storage, with the random access to data demanded by AI/machine learning workloads.

In other words, said Denworth, “an all-flash archive” or “universal storage” that replaces all existing performance tiers and systems. It will “write at 3D Xpoint speed” while reading at terabytes-per-second throughput, with millions of IOPS.

“It will be cheap enough not to need other storage for transactional workloads, but can be used for anything,” said Denworth. “It is exabyte-scale storage at the price of HDD.”

Vast Data hardware is built around NVMe-over-fabrics connectivity internally with 3D Xpoint and “low-cost flash” media. To hosts, it offers NAS and object storage access, namely via NFS, SMB and S3 protocols.

The heavy lifting is done in 2U nodes that each contain up to 675TB of QLC flash with 18TB of 3D Xpoint. Total usable capacity can run to petabytes after data reduction.
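As a rough illustration of how those figures scale (the 675TB per enclosure comes from Vast; the enclosure count and the 3:1 data-reduction ratio below are assumptions, not company-confirmed numbers):

```python
# Rough capacity sketch for Vast's 2U enclosures (illustrative only).
# The 675 TB QLC figure is from the article; the enclosure count and
# 3:1 data-reduction ratio are hypothetical assumptions.

QLC_TB_PER_ENCLOSURE = 675      # raw QLC flash per 2U node (per the article)
DATA_REDUCTION_RATIO = 3.0      # assumed compression/dedupe ratio

def usable_capacity_tb(enclosures: int, reduction: float = DATA_REDUCTION_RATIO) -> float:
    """Effective capacity in TB after data reduction (ignores erasure-coding overhead)."""
    return enclosures * QLC_TB_PER_ENCLOSURE * reduction

# Under these assumptions, two enclosures already land in petabyte territory:
print(usable_capacity_tb(2))   # 4050.0 TB, i.e. about 4 PB effective
```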

These nodes are dumb units. Controller intelligence resides in Kubernetes-based containerised software storage servers that handle I/O requests, data migration between flash storage tiers, erasure coding, data reduction and encryption. Storage controller processing is done here, so to scale performance you add storage server nodes – up to a maximum of 1,000, running up to 10,000 containers.

According to Denworth, latency between servers and bulk storage is always less than 10 microseconds.

The container-based storage server approach, combined with the high-speed media used, allows a stateless design. No server holds cache, so there are no cache coherency issues and no rebuilds are required if a server goes down. Scaling is handled by replicating Docker containers throughout the system.

Vast calls it “Disaggregated, shared everything”, or DASE.
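The stateless DASE idea described above can be sketched in miniature (a toy model, not Vast’s actual implementation): all state lives in the shared media layer, controllers hold no cache, so any controller can serve any request and one failing simply drops out of the pool with nothing to rebuild.

```python
# Toy sketch of "disaggregated, shared everything" (illustrative only).
# All class and method names here are hypothetical.

class SharedMedia:
    """Stands in for the NVMe-over-fabrics-attached QLC/XPoint enclosures."""
    def __init__(self):
        self._blocks = {}          # the only stateful component in the system

    def write(self, key, value):
        self._blocks[key] = value

    def read(self, key):
        return self._blocks[key]

class StatelessController:
    """A containerised storage server: no local cache, no local state."""
    def __init__(self, media: SharedMedia):
        self.media = media         # every controller sees the same shared media

    def handle_write(self, key, value):
        self.media.write(key, value)

    def handle_read(self, key):
        return self.media.read(key)

media = SharedMedia()
controllers = [StatelessController(media) for _ in range(4)]

controllers[0].handle_write("obj-1", b"payload")
controllers.pop(0)                 # a controller "fails": nothing to rebuild
assert controllers[0].handle_read("obj-1") == b"payload"
```

Because no controller holds state, “scaling” in this model is just creating more `StatelessController` instances against the same media – which mirrors the container-replication behaviour described above.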

Vast is available as all-hardware, all-software, or a combination of the two.


