
IT admins hate this one trick: 'Having something look like it’s on storage, when it is not'

Metadata in memory... at the access speed of RAM. (Or does it?)

By Chris Mellor, 11 Oct 2017

Debate An argument about how to solve the same technical problem has sprung up between two rival startups with plenty of reason to say the other's tech is not up to scratch. But they raise some interesting issues about how to solve slow access to moved files, where to store metadata, and more.

How best to archive files while preserving ready access? Opinions differ between using symbolic links and keeping metadata in memory: Komprise uses the former, infinite-io the latter.

Infinite-io's CEO Mark Cree recently took issue with Komprise's view of how to solve slow access to moved files by replacing stubs with symbolic links.

Komprise co-founder, president and COO Krishna Subramanian quickly responded with ripostes to infinite-io's assertions.

(Some of the answers have been edited for brevity).

Mark Cree: I applaud Komprise for getting a product to market quickly... The real issue is leaving something behind that’s not the real data no matter what you call it. Either way, you run into problems:

  1. The space savings between a stub and a symbolic link [are] almost irrelevant.
  2. IT admins hate having something look like it’s on storage, when it is not. It makes it extremely hard to do triage when a disaster happens.
  3. Scans are outdated before they finish.
  4. These types of solutions don’t scale well and kill NAS performance.

Komprise on 1: Space savings between a stub and a symbolic link

Krishna Subramanian: I think he is missing the point – we are not replacing a stub with a symbolic link because it takes up less space.

There are two reasons customers hate stubs. The first is that stubs are proprietary, so you need either storage agents so that each storage system can understand the stub, or some proprietary interface to each storage system. Stubs are therefore not portable, and managing revisions of stubs through storage upgrades or migrations is a nightmare.

Komprise co-founder, president and COO Krishna Subramanian

The second reason is that stubs are static and point to the moved data – it is as if you had only one map to your data, and that map lived inside the stub. If that stub is corrupted or deleted for any reason, your data is orphaned. Stub management is therefore a nightmare, and often needs a database that must itself be backed up.

Komprise eliminates both these issues by using dynamic links to create an open, standards-based cross-storage interface that is resilient to failures.

First, a link is a standard construct that the file systems understand, so no proprietary interface is required. We use links not to save space (over what is used by stubs) but to move data transparently without proprietary, restricted approaches such as a stub.

Since the advent of the [Windows] XP operating system, both SMB and NFS file systems have supported symbolic links. With that development we are able to use a standard construct that a file system understands and supports to transparently forward an access request for archived data to Komprise.

Second, unlike other stub-based approaches, we don’t store the context in the stub. With those approaches, if you lose the stub, you lose access to the moved file. Komprise maintains context internally and within the target storage, so if a stub is deleted it can be recreated, assuming it was an inadvertent deletion.
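Subramanian's description – a standard link left behind on primary storage, with the context needed to recreate it held externally – can be sketched in a few lines of Python. The paths and function names here are hypothetical; this illustrates the symlink mechanism, not Komprise's actual implementation:

```python
import os
import shutil

def archive_file(primary_root, archive_root, rel_path):
    """Move a file to the archive tier and leave a symlink behind.

    The link is a standard filesystem construct: any client that
    follows symlinks transparently reaches the moved data.
    """
    src = os.path.join(primary_root, rel_path)
    dst = os.path.join(archive_root, rel_path)
    os.makedirs(os.path.dirname(dst) or ".", exist_ok=True)
    shutil.move(src, dst)   # copies then deletes if tiers are on different mounts
    os.symlink(dst, src)    # leave a standard, non-proprietary link behind
    return dst

def repair_link(primary_root, archive_root, rel_path):
    """Recreate an inadvertently deleted link from externally held context."""
    src = os.path.join(primary_root, rel_path)
    dst = os.path.join(archive_root, rel_path)
    if not os.path.lexists(src) and os.path.exists(dst):
        os.symlink(dst, src)
```

Because the source-to-target mapping lives outside the link itself, deleting the link loses nothing: `repair_link` rebuilds it from that context, which is the resilience claim being made above.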

2: IT admins hate having something look like it’s on storage, when it is not

Krishna Subramanian: We’ve heard just the opposite! IT administrators are unable to determine what data to archive because they are not the owners of the data. Today they must ask permission of the users – and of course users never want their data moved, so nothing gets moved.

With Komprise, they don’t need to ask. They can move data based on IT policies and the data is still accessible and visible to the user so that they can operate on it if needed.

We’ve found that any time you rely on humans/users to do something, it never works. This approach bypasses that crucial roadblock.

We provide the option to give a visual indication that the file is indirect or make it fully transparent – but almost all our customers choose the fully transparent path.

3: Scans are outdated before they finish

Krishna Subramanian: Yes, they are … for hot data! Our success has always come from mapping the appropriate technology to the use case at hand. When moving data that is, say, over six months old – and we are finding that on average 50 per cent of the data on primary storage is over a year old – an adaptive scan that runs in the background without interfering with active work leaves the file servers unaffected.

We also find that during that scan period maybe 0.01 per cent of the files cross the threshold and are now six months old. We catch these on the next scan and move the files then. Since we are dealing with cold data we do not need to be real time, and this eliminates disruptive overhead on the source file servers. Had we been fronting the data and providing metadata access to hot data, this approach would not work.
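The policy Subramanian describes – a background pass that picks out files past an age threshold, with any stragglers caught on the next pass – reduces to a simple filter over file ages. A minimal sketch, with the threshold and names purely illustrative:

```python
import os
import time

SIX_MONTHS = 182 * 24 * 3600  # hypothetical age threshold, in seconds

def find_cold_files(root, threshold=SIX_MONTHS, now=None):
    """Yield regular files whose last access is older than the threshold.

    Files that cross the threshold after this pass are simply picked
    up by the next scan -- cold-data tiering need not be real time.
    """
    now = now if now is not None else time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue  # already tiered: a link was left behind earlier
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished mid-scan; the next pass catches up
            if now - st.st_atime > threshold:
                yield path
```

Skipping links and tolerating mid-scan deletions is what makes it safe for the scan to be stale by the time it finishes: nothing depends on a single pass being complete or current.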

We let existing companies (e.g. NetApp, Pure, EMC) who are good at managing hot data manage hot data. We provide a risk mitigated approach that our customers really appreciate.

4: These types of solutions don’t scale well and kill NAS performance

Krishna Subramanian: He might be thinking of legacy client-server solutions that are limited by central bottlenecks such as databases and so have trouble scaling.

We are a fully distributed scale-out architecture with no central bottlenecks. And we don’t kill NAS performance because we run in the background – traditional approaches run in the foreground and so they disrupt active usage.

We are like a housekeeper of data – just as you would not want your housekeeper clearing dishes while you are eating dinner, Komprise adaptively backs off and runs non-intrusively in the background when the file servers are in active use or the network is in active use.

Our typical customer manages petabytes of data across 10,000+ shares involving several hundred million files across file servers, and we scale seamlessly without customers having to set any special QoS policies or manage our environment.

El Reg: How does infinite-io define the value of metadata?

Mark Cree: Komprise seems to totally miss the value of metadata. The model that does back-end metadata scanning is flawed and slow to recall data. Since the metadata is constantly changing as users access files, a static scan is obsolete before it even finishes. You’re likely to get a lot of false positives on file migrations that will lead to file ping-ponging as active files get migrated and then need to be brought back.

On metadata

Krishna Subramanian: Again, he has completely missed the boat. What he is saying makes sense if you are managing ALL of the metadata … hot and cold … as infinite-io does. For his solution what he is saying is indeed correct. It does not apply to us.

Infinite-io does an initial scan to create the metadata and, I assume, sniffs the network for any metadata changes while scanning. In the process it creates a metadata server that becomes the central point through which all data transactions occur. If infinite-io goes down, you lose access to all of your data. It is fronting a customer’s data – hot and cold. We find this a very high-risk approach.

Mark Cree: The real value of metadata, in our opinion, is to enable the management of vast amounts of data AND to maintain the performance for both active data and archive data as they grow.

Krishna Subramanian: We fundamentally disagree. We feel hot data should be managed by primary storage which the customer bought to manage hot data. We see the primary storage as a large cache of all the hot data. Over time, 99.999 per cent of the data will be cold and that will be on capacity storage. Komprise manages all of that cold data and provides ways to transparently access, search and potentially restore that data as needed.

When cold data is accessed it is cached on Komprise thus providing fast access. If the access exceeds some custom policy, the data is re-hydrated back onto the primary storage. This approach allows us to leverage the primary storage as a cache for hot data. As a result, we do not require extensive, super fast and expensive hardware.

El Reg: So how do you process metadata?

Mark Cree: At infinite-io we take a different approach. We install like a network switch in front of installed storage. Our product is totally transparent to all installed apps and hardware, making us easier to install and maintain. We do a one-time scan of all metadata and put the results in DRAM in our platform.

Mark Cree

The metadata is then kept up to date by watching network activity. In fact, we actually learn metadata from the network while performing the initial scan. Since we have all the metadata, and it is continually being updated in real-time by watching network traffic, we don’t need or use stubs. We know where everything is and simply redirect the request at the network-level.
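Mechanically, what Cree describes is a map seeded by one full scan and then mutated by every file operation observed on the wire. A toy sketch of that state machine – the operation names and structure are hypothetical, since the real product parses NFS/SMB traffic:

```python
import time

class MetadataCache:
    """In-memory metadata map: seeded by an initial scan, then kept
    current from observed network operations. (Illustrative sketch.)"""

    def __init__(self):
        self.entries = {}  # path -> dict of file attributes

    def seed(self, scan_results):
        """One-time bulk load from an initial walk of the filers."""
        self.entries.update(scan_results)

    def observe(self, op, path, **attrs):
        """Fold an observed file operation into the cache in real time."""
        if op == "delete":
            self.entries.pop(path, None)
        else:  # create / write / setattr
            entry = self.entries.setdefault(path, {})
            entry.update(attrs, mtime=time.time())

    def lookup(self, path):
        """Serve a metadata request straight from memory."""
        return self.entries.get(path)
```

Because every mutation flows through `observe`, a `lookup` never needs to touch the back-end filer at all – which is the basis of the "no stubs, just redirect at the network level" claim.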

Krishna Subramanian: As stated above, they front all data. If they go down, what happens to access to the customer’s data? When they come back up, how long does it take to replace their stale metadata with fresh metadata?

I liken this to a housekeeper who tells you: “I will keep your house totally in order and neat, but there is one catch – I will tell you and your family when to wake up and when to eat your meals, and I will watch everything you do. As long as you abide by this, you will be OK.”

Would you hire this housekeeper?

This is the problem solutions like Acopia had and why network-level data management has not worked.

El Reg: Does this have an effect on data access time and performance?

Mark Cree: Since metadata requests make up 80 per cent or more of most workloads, having the metadata in DRAM allows us to dramatically enhance the performance of any NAS system(s) behind us. We serve metadata at 65 microseconds on average, directly on the network, totally off-loading the NAS system(s) behind us. The fastest SSD-based NAS systems today generally respond to metadata requests in the 500-microsecond-to-millisecond range – yes, we can make a NetApp appear 5x-10x faster.

Krishna Subramanian: I would agree with this. Back in the day, metadata chatter was killing NAS file servers (FSs). There were many “metadata” servers geared to absorb that chatter, freeing FSs to do what they do: read and write files. Those companies are not in business today; FSs have solved this problem with fast SSDs. While infinite-io may be faster still, it comes at a cost, and it sits in the path of hot data. Why would a customer buy expensive primary storage only to front it with expensive network-layer metadata servers?

El Reg: You say there is a public cloud access angle to this. What is it please?

Mark Cree: Where this gets really interesting is with the cloud migrated data. We give our customers the tools to create effective cloud migration polices. With them, they rarely need to recall data that has been migrated to a cloud, usually less than 5 per cent of the time.

Even better, of the 5 per cent we may need to recall, 80 per cent of those recalls are requests for metadata. In those cases, infinite-io can intercept the metadata request on the network and respond out of DRAM, making the cloud faster than a flash array and rarely requiring an actual file recall from the cloud. If you are going to a public cloud, this dramatically reduces in/out file charges.
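Taken at face value, Cree's figures compound as follows – a back-of-envelope check with a hypothetical file count:

```python
# Back-of-envelope on the figures quoted above (file count is hypothetical).
migrated_files   = 1_000_000
recall_rate      = 0.05   # "usually less than 5 per cent" get recalled
metadata_share   = 0.80   # 80 per cent of recalls are metadata-only

recalls          = migrated_files * recall_rate          # 50,000 recalls
served_from_dram = recalls * metadata_share              # 40,000 answered in DRAM
cloud_recalls    = recalls - served_from_dram            # 10,000 real object fetches

print(f"Actual cloud object fetches: {cloud_recalls:,.0f} "
      f"({cloud_recalls / migrated_files:.0%} of migrated files)")
# → Actual cloud object fetches: 10,000 (1% of migrated files)
```

So on these numbers only about one in a hundred migrated files ever incurs a cloud egress charge, which is the basis of the cost claim.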

A system that scans data on the back-end has no way to performance-enhance anything. It’s usually the reverse, the continual scanning slows overall system performance.

Krishna Subramanian: To me this makes little sense. But this paragraph does say what we’ve been saying. Cold data that you migrate is rarely accessed. (In fact their 5 per cent seems quite high. We’re not seeing such high numbers).

In my mind, it does not make sense to risk existing infrastructure by putting in expensive, fast hardware resources to accelerate access to data that is rarely accessed! Furthermore, the latency is not just in accessing the metadata; the bigger issue has to do with accessing the content, and infinite-io does not address this bigger issue.

Komprise will cache data accessed from the cloud and reduce further access to the cloud thus reducing costs and providing on-premise access latency. It will re-hydrate that data onto the primary storage based on custom policies to further improve access latency and reduce cloud egress costs.

Their statement – that a system scanning data on the back-end does not improve performance and actually slows things down – is correct only if that system is designed incorrectly: if it gets in the way of active data usage, or if it simply scans data and then sits on its hands. We are an adaptive, analytics-driven, scale-out data management solution designed to handle cold data non-intrusively across storage using open standards.

What about the tech?

We've heard a lot about how Komprise's tech works in Subramanian's rebuttal, so we asked Cree some questions about infinite-io's offering to get to the bottom of how it plans to solve the same problem.

El Reg: If we think of infinite-io as a storage metadata reference layer, then how would you compare and contrast it to Primary Data’s technology?

Mark Cree: Primary Data and infinite-io both rely on live metadata to intelligently migrate files to a cloud or appropriate storage tier. Primary Data virtualizes NAS systems, servers, and clouds into one or more new name spaces. Virtualization is accomplished through a complex mapping process that requires workflow changes as new mount points and/or drive letters are introduced. [It's] similar functionality to companies ... like Polyserve, Acopia, and most recently Formation Data.

Primary Data also provides metadata acceleration, but [its] performance is limited by the layers and I/Os it must traverse to get to the metadata and then serve it up.

At infinite-io [there are] absolutely no workflow changes and [we tier] data to a cloud like Primary Data, but we do this totally transparently to installed applications and users. We may have moved 90 per cent of a customer’s data to a low-cost cloud or storage tier, but to the end user or application, the data appears and responds just as if it were still on a local NAS system. ...

We don’t need to traverse through a file system to respond, we see the request on the network and respond directly out of memory.

El Reg: If we think of infinite-io as a front-end storage array accelerator, then how would you compare and contrast it to Avere’s FXT technology?

Mark Cree: Avere and infinite-io both install in front of existing NAS storage and accelerate metadata, but the similarities stop there. Avere virtualizes the NAS systems and clouds behind them into one or more new name spaces using a complex mapping system similar to Primary Data.

Avere operates like a cache, and requires a “warm-up” period before it can effectively speed up metadata and file operations. In addition to metadata, it can cache files. Avere’s performance, like [that of] Primary Data, is throttled by the fact that it uses a file system to respond to metadata requests. ... It’s a lot of layers and I/Os just to get to the metadata, and their current implementation can’t get down to double-digit-microsecond performance serving metadata or files.

Unlike Avere, we’ll never have any metadata cold spots or misses since we store all metadata in memory.

El Reg: We can see how infinite-io use would be beneficial when accessing data in disk drive arrays but, surely, it’s less beneficial with faster all-flash arrays, where metadata accesses would be fulfilled faster. Are we wrong?

Mark Cree: Our metadata response times are record setting at 65 microseconds on average.  We had the CTO of one of the major storage vendors tell us we are 5 times faster than anything they sell and 3 times faster than any in-memory file system they have simulated responding to metadata requests.

So, yes we can make all-flash NAS arrays faster. ... The secret sauce to our performance comes from being on the network and not having to traverse through the complexity and layers of a file system to respond to a metadata request.

El Reg: What role would infinite-io play if the shared storage resource was an NVMe-over-Fabrics-accessed array?

Mark Cree: We don’t really play directly in the block storage market where NVMe is targeted. Infinite-io is focused on unstructured file data, which most analysts predict is growing at more than five times the rate of structured block data. Having said that, if the NVMe-over-Fabrics array were fronted with a file system, we would treat that file system like any other NAS system and perform inactive-data migration and metadata acceleration.

El Reg: What role does infinite-io have in a data centre built with hyper-converged infrastructure appliances, where the shared storage is distributed between the (server) nodes?

Mark Cree: This is an interesting application, and we’ve had some discussions with one of the large hyper-converged vendors. Most hyper-converged systems store VM files on a file system. Under that model, we would be able to tier cold VMs on the hyper-converged system to a low-cost storage tier like a cloud while making them appear and recall just as if they were local – increasing the hyper-converged system’s performance by freeing local storage and lowering inactive-VM storage costs.

El Reg: Would infinite-io have a role to play in the public cloud if the storage array being front-ended was a software implementation in AWS or Azure, for example?

Mark Cree: The infinite-io system is based on standard x86 code. We could theoretically run it on any hardware with enough horsepower to deliver the expected data throughput rates. Being a startup, we’ve focused on on-premises customer deployments, but there is nothing from a technical perspective that would stop us from front-ending storage that was a software implementation in a cloud like AWS, Azure, Google, IBM/SoftLayer, or Virtustream.

Comment

Should you use infinite-io or Komprise to tier your data and move cooling data to cheaper, slower storage tiers and, ultimately, to the public cloud?

Komprise does not dispute infinite-io's use for hot or primary data, but beyond that the two have differing views – views that perhaps only a comparative benchmark run could settle with real-world data.

It would make sense, perhaps, to have pilot installations if your shortlist choice included both of these suppliers. ®

The Register - Independent news and views for the tech community. Part of Situation Publishing