Facebook's Open Compute could make DIY data centres feasible

Fitting in never looked so good

DIY vs COTS: Part 2 Last time I looked at the PC versus console battle as a metaphor for DIY versus Commercial Off the Shelf (COTS) data centres, and touched on the horrors of trying to run a DIY data centre.

Since 2011, however, we've had the Open Compute Project, initiated by Facebook. The ideal is an industry-standard data centre, with OCP members agreeing on open interfaces and specs.

Does Open Compute shift the DIY data centre story back in favour of build and against buy?

The PC-versus-console metaphor is relevant to an examination of Open Compute. Of particular note is that, after the dust cleared, the PC gaming market settled into equilibrium.

Developers wanted the ability to push the envelope further than they could with consoles, but weren't prepared to make the ever-increasing investments needed to keep developing games for the bleeding edge when there was so much easy money to be made on consoles.

PC gamers, obsessed with driving down costs, eventually figured out how to number-crunch their way to the exact price/performance sweet spot for each and every generation of hardware. This led manufacturers to focus their efforts on that sweet spot, engage in price wars and, ultimately, create the gaming notebook.
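
To make that sweet-spot hunting concrete, here is a minimal sketch in Python – with entirely invented card names, prices and benchmark scores – that ranks hypothetical GPU options by performance per dollar:

    # Hypothetical GPU options: (name, price in dollars, benchmark score in FPS).
    # Every figure here is made up purely to illustrate the calculation.
    options = [
        ("budget_card",   250,  90),
        ("midrange_card", 400, 150),
        ("highend_card",  800, 180),
    ]

    def value(option):
        """Return performance per dollar (FPS/$) for a (name, price, fps) tuple."""
        _name, price, fps = option
        return fps / price

    # Sort best value first; the top entry is that generation's "sweet spot".
    for name, price, fps in sorted(options, key=value, reverse=True):
        print(f"{name}: {fps / price:.3f} FPS per dollar (${price}, {fps} FPS)")

On this invented data the midrange card comes out on top, which is the kind of per-generation sweet spot being described here.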

For all intents and purposes, gaming notebooks are homogeneous. They're all more or less the same, regardless of vendor. As such, each generation is functionally a gaming appliance: developers have a standard platform to target, and one that increments faster than consoles.

With the exception of the odd edge case that demands extreme performance, most gamers are satisfied with gaming notebooks. There really isn't much room left to grind vendors on margins, but there's enough (maybe) for the business to be viable in the mid-to-long term.

An equilibrium was reached between vendors, developers and gamers. Stability set in, and the cadence of upgrades, costs and so forth became predictable.

While I am quite happy to have that all behind me, doing those sorts of calculations, supply chain management, prototyping, testing and QA is the living heart of DIY data centres. I ran a network of a dozen or so MSPs and companies that collectively managed maybe 5,000 servers across 200 sites.

There are people at Google, Facebook and Amazon doing this for millions of servers across at least as many sites, and that's before we get into networking, managing WAN, power, cooling and so on. Most of this I did, too, but there aren't a lot of WAN or power suppliers in Canada and cooling is easier.

DIY data centre types of today are fortunate. The market as a whole has ground server margins down to the point where the Open Compute Project handles most of that work for you. For those needing a little more vendor testing and certification, Supermicro systems with their integrated IPKVMs are such good value for the dollar that you can go the DIY route and still get most of the benefits of COTS while keeping it cheap.

The ODMs are getting in on the deal. Huawei, Lenovo, ZTE, Xiaomi, Wiwynn/Wistron, Pegatron, Compal and Lord knows how many others are now either selling directly to customers or selling on through the channel with minimal added margin.

Recently, it has been noted that this trend is affecting storage. It's only noticeable there because – unlike with servers – it's a relatively new phenomenon. Networking is next, and I wouldn't want to be the CEO of Cisco right about now.
