Is this the real life? Is this just fantasy? Self-processing flash drives, we'll need more capacity

OpenIO talks us through how it's applying its software to SSDs

Interview Will object storage using SSDs with embedded servers become a realistic storage/processing technology?

The idea of building disk drive-based devices as a cluster of self-processing storage nodes has been associated with Seagate and its Kinetic drives, and with OpenIO and its nano-nodes. Recently Huawei revealed it's developing NVMe-based SSDs with IP access and on-drive object storage.

El Reg asked OpenIO's product strategy head, Enrico Signoretti, some questions about what this might mean, wanting to find out if the idea was fantasy or had real prospects.

El Reg What do you think are the possibilities if the storage drive used by OpenIO becomes an SSD?

Enrico Signoretti Well, we are actually working with two different HW partners on this topic.

One of them is exploring the possibility of using NVMe drives and faster connectivity. The goal is to have a reference design ready for the beginning of 2018. We are working hard on it because demand for such a product is quite high. OpenIO is an open-source software company, and we take the same open approach with our partners.

Not only HW vendors, though. Last month we worked with OVH SoYouStart on their dedicated ARM servers (conceptually similar to the nano-node) and we produced a white paper showing benchmarks and the potential of the solution. This work will lead to the availability of OpenIO SDS on ARM, provided by OVH at a very good price and fully supported end-to-end.

El Reg OpenIO's SLS-4U96 appliances can host up to 96 nano-nodes, each one with a dedicated 3.5-inch HDD or SSD. This was back in December 2016. Has OpenIO done any nano-node development using SSDs?

Enrico Signoretti The new SLS design will have more options (48- and 60-slot chassis designs could be available too). At the moment most installations are for active archive use cases, but we are working to add SSDs and more CPU power to take advantage of our serverless computing framework (Grid for Apps) and run applications directly on the disk.

El Reg What are the arguments in favour of SSDs over HDDs, and what are the arguments saying HDDs are better?

Enrico Signoretti The strategy of OpenIO is very clear and is based on two pillars. On one hand, we work to reduce $/GB. Our object store is the key to doing that and, at the moment, the HDD still has a better $/GB than the SSD. On the other hand, we work to improve $/data (meaning improving the value of the raw data stored in our system), and we do that with Grid for Apps. A more capable nano-node (with more CPU, a faster network and an SSD) will be perfect for this.
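To make the two cost pillars concrete, here's a minimal back-of-the-envelope sketch in Python; the drive prices are illustrative assumptions, not OpenIO or market quotes.

    # Back-of-the-envelope $/GB comparison (prices are assumptions).
    hdd = {"capacity_tb": 12.0, "price_usd": 400.0}    # assumed enterprise HDD
    ssd = {"capacity_tb": 15.3, "price_usd": 4000.0}   # assumed enterprise SSD

    for name, drive in (("HDD", hdd), ("SSD", ssd)):
        usd_per_gb = drive["price_usd"] / (drive["capacity_tb"] * 1000)
        print(f"{name}: ${usd_per_gb:.3f}/GB")

    # HDD: $0.033/GB vs SSD: $0.261/GB -- roughly an order of magnitude,
    # which is why the $/GB pillar still points at spinning disk, while
    # the $/data pillar (value extracted per byte stored) is where a
    # flash-plus-compute nano-node would earn its keep.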

El Reg What applications might need the speed of flash-based nano-nodes and the scaling and on-drive compute of OpenIO SW?

Enrico Signoretti At the moment the HDD nano-node is good for traditional object storage use cases (such as active archives), but we want to replicate on the nano-node what we are already doing on x86 platforms.

Thanks to Grid for Apps, we have already demonstrated image recognition and indexing, pattern detection, data validation/preparation during ingestion and, more generally, data processing and metadata enrichment. With the right amount of CPU power we will be able to move most of these operations directly to the disk level, creating value from raw data while it is saved, accessed or updated.
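Signoretti doesn't give handler code here, so the following is only a schematic sketch of the event-driven pattern he describes: a function fires when an object is ingested, runs some recognition, and writes the result back as searchable metadata. The event shape and the storage client calls are hypothetical stand-ins, not the actual Grid for Apps API.

    # Schematic sketch of ingestion-time metadata enrichment.
    # The 'event' dict and 'storage' client are hypothetical stand-ins.

    def classify_image(data: bytes) -> list:
        """Placeholder for an on-node model, e.g. face or pattern detection."""
        return ["face:unknown", "outdoor"]   # dummy labels

    def on_object_created(event: dict, storage) -> None:
        # Fetch the freshly written object from the local store...
        data = storage.get(event["container"], event["object"])
        labels = classify_image(data)
        # ...and attach searchable metadata at write time, so the data
        # gains value ($/data) the moment it lands on the drive.
        storage.set_properties(event["container"], event["object"],
                               {"labels": ",".join(labels)})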

El Reg SSDs are typically 2.5-inch form factor devices. Could a nano-node be shrunk to fit that form factor, or would the 2.5-inch SSD need a 3.5-inch carrier?

Enrico Signoretti Yes. You'll see different form factors, depending on the choices made by our hardware partners and on the use cases and density required by customers.

El Reg With the maximum disk drive capacity being 12TB and the maximum SSD capacity being 15.3TB, or even larger with 30TB or 60TB devices from Samsung and Seagate, does that move the capacity argument away from disk drives towards SSDs?

Enrico Signoretti There is no doubt that SSDs are the future, but the $/GB still favours the HDD at the moment. That won't last long, though, and I'm sure that sooner or later 3D-NAND QLC will become an interesting alternative to HDDs for almost all applications.

We already have customers in production with all-flash infrastructures, but the use case is very demanding in terms of performance, with thousands of emails stored and indexed every second; not exactly a traditional use case for object storage. My point is that cheaper flash will open the door to many more use cases for object storage, and our software is already optimised to take advantage of it.

El Reg With flash technology moving to even greater capacity, with 30TB 2.5-inch drives a realistic possibility in 2018 using TLC 3D NAND, and 60TB or greater devices using QLC flash coming, does that skew the capacity argument further towards SSDs, because the cost/GB of the storage goes down?

Enrico Signoretti A 30/60TB drive will move the needle for sure. Density and power consumption are key factors for modern datacentres, especially large installations. Even if the $/GB of a single SSD won't be lower than the HDD's, larger SSDs will help build denser, more power-efficient systems, which will drive down overall TCO.
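As a rough illustration of the density and power argument, here's a sketch sizing a 10PB deployment; the drive counts follow from the capacities, but the per-drive wattages are assumptions.

    # Density/power sketch for a 10PB raw target (wattages are assumptions).
    import math

    target_tb = 10 * 1000
    configs = {
        "12TB HDD": {"tb": 12, "watts": 8.0},   # assumed active power
        "30TB SSD": {"tb": 30, "watts": 7.0},   # assumed active power
    }

    for name, c in configs.items():
        drives = math.ceil(target_tb / c["tb"])
        print(f"{name}: {drives} drives, ~{drives * c['watts'] / 1000:.1f} kW")

    # 12TB HDD: 834 drives (~6.7 kW); 30TB SSD: 334 drives (~2.3 kW).
    # Fewer, denser drives mean fewer chassis, less floor space and less
    # power and cooling -- the TCO levers Signoretti is pointing at, even
    # before the per-drive $/GB gap closes.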

El Reg OpenIO says a disk-based nano-node has a failure domain restricted to that disk. Suppose we move from a 12TB disk drive to a 60TB SSD: what would that mean in failure domain terms, and what would be the effect?

Enrico Signoretti Well, everything is getting bigger, isn't it? Many flash vendors are actively working to squeeze 1PB of flash into a single rack unit (the recently announced 128TB drive from Samsung and the 30TB "ruler" from Intel are just a couple of examples).

What will happen if a 1U/1PB node fails? It will impact the entire cluster in much the same way that a smaller failure impacts HDD-based systems today. With the nano-node, the failure domain is still contained, and the huge amount of distributed CPU will help with rebuild operations (erasure codes are more complex than the XOR operations usually found in RAID 5/6).
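To see why the contained failure domain matters, here's a rough rebuild-time model; the link speeds and cluster sizes are illustrative assumptions, and it ignores erasure-coding read amplification.

    # Rough rebuild-time model for a failed storage node (parameters are
    # illustrative). With erasure coding, surviving nodes read their shards
    # and recompute the lost data in parallel, so rebuild bandwidth scales
    # with the number of peers rather than with a single spare drive.

    def rebuild_hours(lost_tb: float, peers: int, per_node_gbps: float) -> float:
        aggregate_bps = peers * per_node_gbps * 1e9    # parallel rebuild reads
        seconds = (lost_tb * 8e12) / aggregate_bps     # 1 TB = 8e12 bits
        return seconds / 3600

    print(f"{rebuild_hours(12, 95, 1.0):.1f} h")     # 12TB nano-node: ~0.3 h
    print(f"{rebuild_hours(60, 95, 1.0):.1f} h")     # 60TB SSD nano-node: ~1.4 h
    print(f"{rebuild_hours(1000, 9, 10.0):.1f} h")   # 1U/1PB node, 9 peers: ~24.7 h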

El Reg Having NVMe access to an SSD-based nano-node would mean an NVMe port and endpoint on the drive. What would that mean in terms of expense and complexity?

Enrico Signoretti We are not there yet. We are still thinking in terms of traditional storage interfaces and Ethernet connectivity. NVMe will be the next step, but at the moment it would be too expensive and I'm not sure it is necessary. At the end of the day we are still focusing on capacity-driven, not latency-sensitive, applications.

El Reg What might be the cost/node of an NVMe-based flash drive nano-node system compared to an existing disk drive-based system?

Enrico Signoretti This is a good question too, but the cost of the compute part of the nano-node today is a quarter to a fifth of the disk's. I suppose we will have to match the same level of cost for an NVMe-based nano-node before thinking about it. That said, we could consider nano-nodes with more CPU and RAM for compute-intensive applications.
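Taking that ratio at face value, a quick sketch of what "the same level of cost" implies; the dollar figure is an assumption.

    # What a 1/4-1/5 compute-to-disk cost ratio implies (price assumed).
    hdd_price = 350.0                         # assumed 12TB HDD price, USD
    low, high = hdd_price / 5, hdd_price / 4
    print(f"Compute budget per nano-node: ${low:.0f}-${high:.0f}")
    # -> $70-$88. An NVMe SSD of comparable capacity costs several times
    # the HDD, so either the compute share shrinks as a proportion, or
    # (as Signoretti suggests) the node gains CPU/RAM to justify it.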

El Reg Suppose I wanted a facial recognition system built from NVMe-based flash nano-nodes. How might OpenIO broadly specify such a system?

Enrico Signoretti There are several ways to implement nano-nodes, from my point of view, but the interesting story here is that the nano-node is, at the end of the day, an independent computer. A remote camera could have one or more nano-nodes storing the entire video stream, doing operations locally (like face recognition, removal of useless footage, and so on) and sending only relevant information (with metadata included) to the core. All data is saved locally, but only relevant information is moved to the cloud.

By operating in this way, you can save a huge amount of network bandwidth while removing all the clutter from the central repository, which also results in faster operations and lower storage costs in the cloud. This is an advanced application, but it is a game changer.
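A minimal sketch of that store-locally, forward-only-matches pattern; the detector, local store and uploader are placeholders, not OpenIO components.

    # Edge filtering sketch: keep the raw stream on the nano-node, push
    # only matches plus metadata to the core (all names are placeholders).
    import json, time

    def detect_faces(frame: bytes) -> list:
        """Placeholder for an on-node face recogniser."""
        return []   # e.g. [{"person_id": "42", "confidence": 0.97}]

    def process_stream(frames, local_store, core_upload):
        for i, frame in enumerate(frames):
            local_store.put(f"frame-{i:08d}", frame)   # full stream stays local
            matches = detect_faces(frame)
            if matches:                                # only hits cross the WAN
                core_upload(json.dumps({
                    "ts": time.time(), "frame": i, "matches": matches,
                }))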

In fact, this approach solves one of the most critical issues of IoT by doing some data processing as the data is created, limiting data transfers to relevant information only... a level of efficiency that is beyond data compression or bandwidth optimisation.

This is also why we are not only talking with cloud and enterprise customers about the nano-node. I can't tell you more now... but IoT is another field we are looking at very closely.

El Reg A 96-drive SLS-4U96 with 15.3TB SSDs would have a 1.47PB capacity. With NVMe access over fabrics we would be looking at less than 100 microsecs access time. The on-drive microservers would process incoming video and apply facial recognition technology. How fast would such a system respond with a match against a list of stored faces of interest compared to other systems in use?

Enrico Signoretti I do not have an answer for that yet. I'm sure this will be fast, though. But this is only one of the many applications we are researching. For example, we are working with a partner to do office document analysis through ML/AI.

It means we could recognise the value of a document and add relevant information to it when it is stored. One possible use case is finding documents that are not well formatted or not compliant with laws and regulations (to avoid lawsuits, etc.); another is finding the right set of documents when you need them, just by searching on the subject and what you expect to find inside them. This will change the concept of the data lake, making it smarter and finally useful!
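Pairing with the ingestion sketch earlier, here's the retrieval side: once documents carry enriched metadata, the "smart data lake" query becomes a simple filter over object properties. The storage client calls are hypothetical.

    # Retrieval side of a "smart data lake": filter objects by the metadata
    # attached at ingestion time (hypothetical storage client interface).

    def find_documents(storage, container: str, **wanted) -> list:
        hits = []
        for name in storage.list(container):
            props = storage.get_properties(container, name)
            if all(props.get(k) == v for k, v in wanted.items()):
                hits.append(name)
        return hits

    # e.g. every contract flagged as non-compliant at ingestion:
    # find_documents(storage, "legal", doc_type="contract", compliant="false")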

El Reg Let's scale this system. A rack with 10 SLS-4U96 enclosures and 15.3TB SSDs would hold 14.7PB. Let's double the drive capacity to c30TB and we're looking at a near-30PB rack of NVMe-accessed, SSD-based nano-nodes. What applications do you think could use such a system?

Enrico Signoretti We are already working on POCs with SLS-like systems for projects starting at a minimum of 10PB. At the moment the HDD in the nano-node is limiting the range of applications because of the lack of IOPS.

As soon as flash becomes a viable option in terms of $/GB for capacity-driven applications, we will be ready to leverage our serverless computing framework to run more applications closer to the data. Real-time video encoding, AI/ML, IoT and real-time data analytics are all fields we are looking at very closely, and we will share more on this in the coming months.

El Reg Are these questions sensible? Or am I missing the point here?

Enrico Signoretti The questions were all good to me. From our point of view, even if we are growing very well and object storage is what pays the bills today, the future is not in $/GB (a race to the bottom) but in the $/data we will be able to create for our customers. This is why we are investing in our serverless computing framework and the nano-nodes.

TL;DR

The concept of sets of self-processing flash drives has real prospects, which will depend on high-capacity, relatively low-cost flash drives arriving. While some applications are appearing now, this is very much corner-case territory at the bleeding edge of technology. A combination of Huawei-type hardware technology and OpenIO software looks promising. Let's watch this space. ®
