PernixData chap: We are to storage as Alfred Nobel was to dynamite

Snazzy memory tech demolishes DRAM volatility restrictions, says chief techie

Interview PernixData chief technologist Frank Denneman thinks distributed fault-tolerant memory technology (DFTM) is ushering in an era of nanosecond-scale storage access latencies that could fundamentally change applications and the way they access data.

Application run times could be cut to a tenth of their present levels or less, and servers could handle many more virtual machines.

We talked to Frank the Evangelist to better understand his point of view.

El Reg: Set the scene for us.

Frank Denneman: In the enterprise, RAM's inability to retain its contents in the event of power loss has precluded its use for primary data storage, despite its high-performance characteristics.

But if it could be harnessed, if the data loss issue could go away, if its volatility could be tamed in the same way that Alfred Nobel tamed nitroglycerine’s volatility with his invention of dynamite, then applications could stop paying the storage access wait tax and run much faster.

Look, massive memories are bringing storage changes upon us, right now.

El Reg: How so?

Frank Denneman: Current-generation Intel Xeon processors are able to support up to 1.5TB of memory each. In true “virtuous cycle” fashion, VMware recently announced support for up to 12TB of RAM per [8-socket] host in its flagship product, vSphere 6, to take full advantage. We’ve seen independent software vendors (ISVs) make a concerted effort to harness this boosted DRAM resource to increase application performance.

El Reg: How have they done this?

Frank Denneman: They have developed memory-caching libraries, in-memory applications and distributed fault-tolerant memory. Memory-caching libraries and in-memory apps have constraints which affect IT operations and services.

El Reg: Such as what? What is the limitation of memory caching?

Frank Denneman: Although applications using memory-caching libraries are able to utilise vast amounts of memory to accelerate data access and processing, the application has to be specially tailored to use the libraries. Existing applications, for example, can’t use the technology without extensive coding changes. This, clearly, isn’t a walk in the park, and it limits the technology’s reach.
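To illustrate the tailoring Denneman describes, here is a minimal cache-aside sketch in Python. The cache (a plain dict standing in for a memcached/Redis-style client) and fetch_from_database() are hypothetical stand-ins for illustration only; the point is that every read path in the application must be rewritten to consult the cache explicitly.

```python
# Minimal cache-aside sketch. The dict "cache" stands in for a
# distributed memory-caching library client, and fetch_from_database()
# is a stub for a slow storage-backed lookup (both hypothetical).

cache = {}

def fetch_from_database(key):
    # Stub: in a real application this would hit disk-backed storage.
    return f"row-for-{key}"

def get_record(key):
    # Every read path has to be restructured like this: check the
    # cache first, fall back to storage on a miss, then populate.
    if key in cache:
        return cache[key]             # cache hit: served from RAM
    value = fetch_from_database(key)  # cache miss: pay the storage wait tax
    cache[key] = value                # populate for subsequent reads
    return value

print(get_record("customer:42"))  # first call misses, touches storage
print(get_record("customer:42"))  # second call is served from memory
```

Retrofitting this pattern into an existing codebase means touching every data access site, which is why Denneman argues the approach doesn't scale beyond purpose-built applications.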

Frank Denneman from his blog

El Reg: But we have in-memory applications like SAP HANA appearing and they've been well-received.

Frank Denneman: Yes, agreed. Some vendors, like SAP, embraced the large-memory trend early on and did the heavy lifting for their user base. Did they solve the problem of the volatile nature of memory? Unfortunately not. For example, although SAP HANA is an in-memory database platform, logs have to be written outside the volatile memory structure to provide the ACID (Atomicity, Consistency, Isolation, Durability) guarantees that let database transactions be processed reliably.
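The pattern Denneman is pointing at is write-ahead logging, shown in the sketch below. This is the general technique, not SAP's implementation: the in-memory table is fast but volatile, so each transaction is flushed to a durable log before it is applied in memory, and replaying the log after a crash rebuilds the lost state.

```python
import json
import os

table = {}  # volatile in-memory store: fast, but wiped by power loss

def commit(txn, log_path="txn.log"):
    # Append the transaction to a durable log and force it to disk
    # *before* applying it in memory: the D in ACID.
    with open(log_path, "a") as log:
        log.write(json.dumps(txn) + "\n")
        log.flush()
        os.fsync(log.fileno())
    table[txn["key"]] = txn["value"]  # apply only once the log is safe

def recover(log_path="txn.log"):
    # Rebuild the in-memory table from the durable log after a crash.
    if os.path.exists(log_path):
        with open(log_path) as log:
            for line in log:
                txn = json.loads(line)
                table[txn["key"]] = txn["value"]

commit({"key": "acct:1", "value": 100})
table.clear()   # simulate power loss wiping DRAM
recover()
print(table)    # state restored from the log: {'acct:1': 100}
```

The fsync on every commit is exactly the storage access wait tax Denneman mentions: even an "in-memory" database ends up bounded by the latency of whatever durable medium holds its log.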
