Rackspace to build custom servers, storage for cloud biz

What's good for the Zuck is good for the Rackers

Open Compute 2013 Rackspace Hosting is getting into custom server design, and it is working with manufacturing partners in the Open Compute Project to get multiple suppliers building its tweaked versions of the servers, storage arrays, and racks that Facebook created to run its social network.

The hosting giant and cloud contender has made no secret of the fact that it has found open source religion when it comes to software. That is why it co-founded the OpenStack cloud controller project with NASA more than two years ago, joining the ranks of hyperscale data center operators that rely on open source as well as homegrown applications to run their businesses.

Rackspace admitted in early 2011 that it was moving towards whitebox servers and away from machines supplied by tier one server makers, but it was not clear then that Rackspace would go all the way and not only embrace OCP designs for IT gear, but also put its own hardware engineers to work customizing those custom servers.

But that is precisely what Rackspace is now doing, explained Mark Roenigk, chief operating officer at the company, at last week's Open Compute Summit in Santa Clara.

It was not all that hard to figure out that this was the plan, with Rackspace making a lot of noise about the openness of OpenStack and being one of the founding members of the Open Compute Project, which Facebook started a year and a half ago. The OCP effort is gaining steam, and at the same time it is letting some of the air out of the tires of the bespoke server businesses of Dell and Hewlett-Packard.

The largest hyperscale data center operators have figured out that they can cut out most of the middlemen and go straight to ODMs in China or Taiwan to get custom motherboards and systems built, without having to pay Dell, HP, or anyone else a premium for the privilege of not taking one of their plain vanilla x86 servers.

Google has been making custom servers for years, and Amazon dabbles in them, too, although it also buys a certain amount of Rackable-brand gear from Silicon Graphics. Microsoft uses a lot of custom Dell and HP iron for its search and cloud infrastructure, and Facebook used to be the poster child for Dell's Data Center Solutions custom server business until Zuck's engineers decided to do the work themselves and design data centers and servers that fit like a glove over fingers.

The combination of OpenStack running on Open Compute servers will let Rackspace marketeers claim they are offering the most open cloud on the planet, and if Rackspace opens up the designs of its systems and gives them to Open Compute, then companies could in theory have exactly the same iron in their own private cloud running the same release of OpenStack as Rackspace is hosting in its own data centers.

This could simplify support for hybrid clouds that span private and public computing and storage. It is also precisely what Amazon says it will never sell: the only cloud that Amazon really believes in is its own public cloud, its partnership with Eucalyptus Systems to make its eponymous cloud control freak compatible with EC2 notwithstanding. OCP machinery could be a differentiator for Rackspace on both the home and the external data center fronts.

The Open Rack standard that Facebook and its OCP friends launched last May, and that is being deployed by the social network in its Forest City, North Carolina, and Lulea, Sweden data centers, does not fit the needs of Rackspace, explained Roenigk. And so it is tweaking the Open Rack design with the help of an ODM named Delta, which makes power supplies, among other things.

The custom Open Rack from Rackspace

The changes are not huge, but they are significant. The Open Rack used by Facebook takes all of the power supplies out of the servers and puts them on three power shelves. This was not the right mix of shelves and supplies for Rackspace – Roenigk did not explain why – and so its rack has one power shelf in the middle and two zones for servers or storage, rather than the three power shelves and three equipment zones in the Facebook-designed racks.

To each their own, and from each according to their specs.

Facebook Open Rack versus Rackspace Open Rack

The differences between the two racks, as you can see above, go beyond that. The Rackspace version of Open Rack is a bit taller, and it is designed for a slightly higher power density as well – between 14.4 and 16.8 kilowatts per rack, compared to 12 kilowatts for the Facebook rack.
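For a rough sense of what those power budgets buy, here is a minimal sketch. The rack figures come from the article, but the per-node draw is a hypothetical assumption, since Rackspace has not published one:

```python
# Back-of-the-envelope: how many nodes fit in each rack's power budget.
# Rack budgets are from the article; the 250W per-node draw is a guess
# for a two-socket Xeon E5 node, not a figure Rackspace has disclosed.
WATTS_PER_NODE = 250  # hypothetical average draw per node

for name, rack_kw in [("Facebook Open Rack", 12.0),
                      ("Rackspace Open Rack, low end", 14.4),
                      ("Rackspace Open Rack, high end", 16.8)]:
    nodes = int(rack_kw * 1000 // WATTS_PER_NODE)
    print(f"{name}: {rack_kw:.1f}kW powers roughly {nodes} nodes")
```

At that guessed draw, the extra 2.4 to 4.8 kilowatts buys room for roughly nine to nineteen more nodes per rack.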

Facebook has AC or DC power options, but Rackspace does not, because in a lot of cases it is co-locating its gear in the facilities of others. It has to stick to AC power until DC becomes more common, or until it decides to build its own data centers as Facebook has done.

Rackspace is taking out the batteries that Facebook uses in its Open Rack and putting in a power distribution unit for networking gear, as well as a cable management bay that Facebook does not have.

According to a report in Data Center Knowledge, Rackspace will be deploying its modified Open Racks in a new data center in Ashburn, Virginia, just down the road from the Equinix facility that hosts the East region of the Amazon Web Services cloud.

These 21-inch-wide racks require some customization on the part of the co-lo facility operator, DuPont Fabros Technology, but with Rackspace spending approximately $200m a year on servers and storage, DuPont Fabros no doubt doesn't mind making exceptions – because, frankly, if it doesn't, then Equinix can.

On the server front, Rackspace is working with Quanta Computer and Wiwynn, the US arm of ODM Wistron, to co-design and build servers and disk arrays that will slide into these modified Open Racks. The server designs are tweaked versions of Open Compute servers and Open Vault arrays, which again have been modified to meet specific needs that Rackspace has.

The Rackspace three-node server manufactured by Wiwynn

The first new server is a variant of the three-node sled server called "Winterfell" that Facebook is using as its Web server in the Swedish data center. The social network did not provide any details on the Winterfell machines last week, aside from a picture, but Rackspace has divulged some feeds and speeds for the variant it is having built by Wiwynn.

It is using Intel Xeon E5 processors, and it has a total of sixteen memory slots for a maximum of 256GB of memory across the sixteen cores in the box. The nodes have a RAID controller with cache memory to link out to external storage, plus one 3.5-inch SATA drive that slots into the vanity-free server design.
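The memory ceiling is simple arithmetic; a quick sketch, assuming the 16GB DDR3 DIMMs that were the usual top-end server parts at the time – Rackspace has not said which sticks it is using:

```python
# Sixteen slots filled with 16GB DIMMs hits the 256GB ceiling described above.
# The 16GB stick size is an assumption, not a disclosed Rackspace spec.
slots = 16
dimm_gb = 16
print(f"Max memory: {slots} slots x {dimm_gb}GB = {slots * dimm_gb}GB")
```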

The motherboard snaps in network controllers through mezzanine cards, and in this case there are two 10 Gigabit Ethernet ports in the mezz slot, plus two more 10GE ports on a card in a PCI-Express slot. This Wiwynn variant of the Winterfell box is in production now.

The second Rackspace server, which is being manufactured by Quanta, is still in development, and it includes hot-swap fans as well as improved cable routing over the Facebook design.

The Rackspace-Quanta four-sled dense server

This Quanta server looks very much like a Twin2 chassis from Super Micro, and it puts four nodes and two redundant power supplies into the 1.5U chassis. The nodes are also two-socket Xeon E5 machines with sixteen memory slots that max out at 256GB of main memory, and they have the same networking and RAID controller options as the Wiwynn machine. But these nodes have two 2.5-inch SATA disk drives apiece, and they are wider as well as shorter.

Rackspace has also tweaked the Open Vault storage array that Facebook put at the heart of its cold storage for the Facebook Photo service. This JBOD disk array is code-named "Knox" inside of Facebook, and the design has been opened up at the OCP. Rackspace says the variant it has put together is already being manufactured by Wiwynn; it has 30 3.5-inch SATA drives in each system, which are linked to a storage server node in the chassis by four SAS interfaces.

Going hand-in-hand with its Quanta server, Rackspace has also cooked up its own JBOD storage array:

Rackspace's own twist on a JBOD array

This array is still in development, and it doesn't have the big hinge in the Open Vault design that looks like more trouble than it is worth. The Rackspace-Quanta JBOD, which does not yet have a code-name, packs 28 3.5-inch drives in a single chassis and has the same four SAS interfaces feeding back to the servers as the Open Vault design. The drive carriers at the front of the chassis are hot swappable, as are the fans that cool the disks.

The Quanta server and storage boxes will be in production in the second quarter and are in their testing cycles now.

Why is Rackspace doing OpenStack and Open Compute? To save money, plain and simple.

"We don't have a large supply chain organization, nor do we have a large product engineering organization," Roenigk explained in his keynote. "When we put resources into something, we need to make sure we get a lot of value out of it."

Specifically, the combination of OpenStack plus Open Compute iron is projected to deliver around 40 per cent capital expense savings and around 50 per cent operational expense savings, according to Roenigk. This money is important, to be sure, but Roenigk says that "the speed that we have been able to implement is every bit as important."
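Put those savings percentages against the roughly $200m a year that Rackspace spends on servers and storage, and the capex side alone adds up. A quick sketch using only the article's own figures (no opex baseline is given, so that side can't be estimated):

```python
# What the projected savings imply, using the figures quoted in the article.
annual_capex = 200_000_000  # ~$200m a year on servers and storage
capex_saving = 0.40         # Roenigk's projected capital expense savings
print(f"Implied capex savings: ${annual_capex * capex_saving / 1e6:.0f}m a year")
```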

It now takes anywhere from five to nine months less to get systems running its software onto the data center floor than it did before the move to OpenStack and Open Compute iron. This is a competitive advantage – well, at least until all the other service providers move to open source hardware and software, too. ®
