HPC storage bods at DDN fish for BIG CATCH in the data lake

A product marketing manager gives his view

Interview Suppliers of storage and analytics systems to the so-called "big data" lake market are enduring turbulent technology transitions and fast-changing requirements, as we saw here.

Startups with $100m-plus funding are common, and one, Cloudera, has surpassed $1bn in funding, as VCs and startup execs scramble to find their way through the big data lake to the glittering prizes they see in prospect.

HPC and supercomputer storage company Data Direct Networks (DDN) says its WOS object storage platform is at the opposite end of the cost spectrum from GRIDScaler, and is architected to deliver a very low TCO, while still providing high performance with massive scalability and data durability. With that in mind we asked Lance Broell, WOS Product Marketing Manager at DDN, a set of questions.

For sure, Lance is putting DDN's best foot forward, as we'd expect, but there's also a lot of insight into what's happening in the big data market. We can discern details of the undercoat and construction underneath the glossy paint job.

We have edited Lance’s answers for concision.

El Reg: At massive, tens of petabyte scale, small $/GB differences in price can increase to huge $/PB differences. For example, a 500TB array at a hypothetical $3/GB competes with another at $3.01/GB. Smallish difference. At 50PB it becomes a huge difference in overall price. How will you and other object storage vendors cope with that?

Lance Broell: Delivering the best solution for the best value in large-scale deployments requires a very solid understanding of customer needs, and careful architecture of everything involved, to make the solution as efficient as possible.

Sales price is important, but it is just one part of the total picture that really comes into play at scale. Vendors need to have great field technical people to work through [architecture, system efficiencies and software tools] with customers; otherwise, the TCO of the solution will grow out of control.

When talking to object storage customers, we find the landscape is really divided into two camps:

  • Those that understand that object storage can be a feature-rich solution architected to address a broad set of use cases, including multi-site collaboration, global content distribution, disaster recovery and integration with HPC storage platforms. The feature here is solving real business issues at a lower $/GB and TCO in a way that can’t be done by the traditional storage systems/players.
  • Those that take a narrow view and implement systems that rely on distributed erasure coding to create a primarily low-cost solution with few features, low performance, and a heavy dependence on the network (latency impacts/limits performance). In short, the only feature here is $/GB. These are typically customers with smaller data sets, or ones who are just beginning to “kick the tires” and have not yet looked into the rest of the economic impacts.

That’s the long way of saying that innovation and addressing business needs are the answer to the $/GB race to the bottom.
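
To put the arithmetic in our question into concrete terms, here is a minimal sketch in Python; the $3.00 and $3.01 per-GB prices and the 500TB and 50PB capacities are the hypothetical figures from the question above, not DDN's numbers.

    # Sketch of how a small $/GB gap scales with capacity.
    # Prices and capacities are the hypothetical figures from the question.
    GB_PER_TB = 1_000
    GB_PER_PB = 1_000_000

    def total_cost(capacity_gb: float, price_per_gb: float) -> float:
        """Total purchase price for a given capacity at a given $/GB."""
        return capacity_gb * price_per_gb

    for label, capacity_gb in (("500TB", 500 * GB_PER_TB), ("50PB", 50 * GB_PER_PB)):
        delta = total_cost(capacity_gb, 3.01) - total_cost(capacity_gb, 3.00)
        print(f"{label}: a $0.01/GB gap is worth ${delta:,.0f}")

    # 500TB: a $0.01/GB gap is worth $5,000
    # 50PB: a $0.01/GB gap is worth $500,000

In other words, a difference that is lost in the noise on a single array becomes half a million dollars at 50PB, which is the scale at which Lance argues TCO, rather than sticker price, decides the deal.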
