Exercises to keep your data centre on its toes

Flatten the structure to stay nimble

Given the size of today's networks, networking should be open, promoting interoperability, affordability and competition among suppliers to deliver the best products.

Let’s drill down a little to explore new developments in the ubiquitous Ethernet standard and see how open networking can help you do jobs more efficiently.

Hub and spoke

If Ethernet packets were aeroplanes, we would see at once how the network infrastructure has a chokehold on overall network speed.

Data centre networking is reminiscent of hub-and-spoke airports where passengers at peripheral airports must fly via hubs. If you want to fly from, say, San Jose in California to Edinburgh in Scotland, you might fly from San Jose to Chicago, from Chicago to London Heathrow and from Heathrow to Edinburgh. Three hops take a lot longer than a direct flight.

Currently, Ethernet networks have a very controlled and directed infrastructure: edge devices talk to each other via core switches, and pathways through the network are controlled by a technology called spanning tree.

This design prevents network loops by ensuring that there is only one path across the network between devices at the edge of the network.
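
As a rough illustration of the effect, the sketch below prunes a small, hypothetical switch mesh down to a single loop-free tree. Real spanning tree elects a root bridge and blocks ports rather than running a breadth-first search, but the outcome is the same: redundant links sit idle and every edge-to-edge conversation is forced onto one path.

```python
# A minimal sketch of why spanning tree leaves only one path between edge
# switches. The four-switch mesh below is hypothetical; real STP elects a
# root bridge and blocks ports, but the pruning effect is the same.
from collections import deque

links = {                      # physical links in a small switch mesh
    "core1": {"core2", "edgeA", "edgeB"},
    "core2": {"core1", "edgeA", "edgeB"},
    "edgeA": {"core1", "core2"},
    "edgeB": {"core1", "core2"},
}

def spanning_tree(root, adjacency):
    """Breadth-first pruning: keep the first link that reaches each switch."""
    tree, seen, queue = [], {root}, deque([root])
    while queue:
        switch = queue.popleft()
        for neighbour in sorted(adjacency[switch]):
            if neighbour not in seen:
                seen.add(neighbour)
                tree.append((switch, neighbour))
                queue.append(neighbour)
    return tree

print(spanning_tree("core1", links))
# [('core1', 'core2'), ('core1', 'edgeA'), ('core1', 'edgeB')]
# Every redundant link is blocked, so edgeA can only reach edgeB via core1.
```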

Avoid the jams

There is also generally a transition from Layer 2 of the OSI networking model, which transfers data between devices in a single section of a network, up to Layer 3, which transfers data between sections of an overall Ethernet network, and then back down to Layer 2 again.

These network architectures worked well back in the day, but no longer. Data networks organised along these lines may be overwhelmed by today’s huge volumes of data traffic.

To avoid traffic snarl-ups, organisations must move data across networks faster, using multiple links or lanes, and if they can, aggregate sections of a network into a single area to avoid the need for Layer 3 control of data movement between the sections. This is called “flattening the network”.

How can this be done? One way is to make better use of the available paths in the network with multi-pathing switches that know about other switches in their domain and can set up and tear down links dynamically.
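
The sketch below, with invented path names, shows the gist of the multipathing approach: each flow is hashed onto whichever equal-cost paths are currently up, so setting up or tearing down a link simply changes the candidate list rather than forcing everything back through one blessed route.

```python
# A rough sketch of how a multipathing fabric might spread traffic: flows are
# hashed onto the set of paths that are currently up. Path names are invented.
import hashlib

active_paths = ["via-spine-1", "via-spine-2", "via-spine-3"]

def pick_path(src_mac, dst_mac, paths):
    """Hash the flow so all its packets take one path (no reordering),
    while different flows are spread across every available link."""
    digest = hashlib.sha256(f"{src_mac}->{dst_mac}".encode()).hexdigest()
    return paths[int(digest, 16) % len(paths)]

print(pick_path("00:1a:2b:3c:4d:5e", "00:5e:4d:3c:2b:1a", active_paths))

# If a link fails, the fabric tears it down and rehashes onto what is left:
active_paths.remove("via-spine-2")
print(pick_path("00:1a:2b:3c:4d:5e", "00:5e:4d:3c:2b:1a", active_paths))
```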

Brocade advocates a standard called Trill (Transparent Interconnection of Lots of Links) for this, in conjunction with its VCS fabric technology. HP has the FlexFabric architecture, and Cisco, Juniper and other suppliers have their own Ethernet fabric-flattening technologies.

Why is traffic growing so much? In part it is because server virtualisation has fuelled more applications running inside virtual machines on servers with multi-core processors.

Traffic growth also reflects the constant on-rush of information into and through data centres. Growth may even accelerate if analyst projections concerning the rise of machine-to-machine sensing and logging prove accurate.

Bypassing the tree

Air travel infrastructure has been painstakingly built up to enhance safety and stop planes colliding, as well as to take advantage of economies of scale at hub airports.

The hub-and-spoke design helps airline and airport operators but not passengers. They could get to their destination much faster by not flying through Heathrow and Chicago.

So too with Ethernet and packets at the Layer 2 level. Data would arrive at its destination more quickly if it could cross the network without having to go up the network tree (“northward”) to the main or core switches and into Layer 3, get processed and then return down the tree (“southward”) to the destination edge device.

This Layer 3 supervision is an obstacle to packets travelling more directly, east-west as it were, staying within Layer 2 between the edge devices.

Ethernet is being transformed to provide edge-to-edge device communications within Layer 2 and without direct core switch supervision.

Intelligence about links and their states is made available to the Layer 2 devices, and multiple paths through a network can be exploited to increase link utilisation and the amount of traffic that can flow in a given time.

Network resources have to be paid for and wasting cash on underused wires does not make sense. How can that be stopped?

A network is a set of physical resources – wires, switches and routers – with firmware in the switches and routers directing what these devices do. The firmware can be changed to alter this behaviour but such changes are not dynamic. They are not implemented by users but come from suppliers.

This means that network pipes can be underused as the pattern of traffic from particular edge devices changes.

For example, a rack of processors could be upgraded to run three times as many virtual machines, tripling network traffic. But there may be no easy way to reconfigure the network and increase link utilisation.

Admin staff can spin up virtual machines and tear them down on demand

What is needed is for the network to be virtualised, for its data traffic and its control or management traffic to be separated, and for networking staff to be able to reconfigure the network dynamically, setting up different bandwidth allocations, routing decisions and so forth.

With servers, admin staff can spin up virtual machines and tear them down on demand, with no need to install and decommission physical machines.

With storage, the same disk blocks can be presented as block storage to databases and as file storage to file-using applications, or the available disk blocks can be assigned to different applications, with the amount of capacity for each application modified as requirements change.

A similar approach is needed for data networking. It needs to be freed from its dependence on slow, non-dynamic changes to its firmware.

A virtualisation layer for networking, described as software-defined networking, would make network administration much more agile, dynamic and responsive to changing circumstances.

The network system software that would do this could run on any available server, not necessarily in the routers or switches. It communicates with them via an application programming interface (API) and tells them what to do.
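
A minimal sketch of the idea follows, with all class and method names invented for illustration: the controller holds the desired state and programs the switches, which do nothing but forward.

```python
# A simplified illustration of separating control from forwarding: the
# "controller" holds policy (bandwidth allocations, routes) and pushes it to
# switches, which only forward. All names here are invented for illustration.

class Switch:
    def __init__(self, name):
        self.name = name
        self.config = {}            # forwarding state installed by controller

    def apply(self, config):
        self.config = config        # the data plane just obeys

class Controller:
    def __init__(self, switches):
        self.switches = switches
        self.policy = {}            # desired state lives here, not in firmware

    def set_bandwidth(self, tenant, mbps):
        self.policy[tenant] = mbps
        for sw in self.switches:    # reprogram the fabric on demand
            sw.apply(dict(self.policy))

fabric = [Switch("edge1"), Switch("edge2")]
ctrl = Controller(fabric)
ctrl.set_bandwidth("tenant-a", 500)   # dynamic change, no firmware update
print(fabric[0].config)               # {'tenant-a': 500}
```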

In addition, this API could be used to automate network changes in response to real-time events.

We can see how a cloud service provider would appreciate such an ability to automatically modify networking characteristics in real time when, for example, a customer requires additional bandwidth due to a seasonal surge in business.

The customer could fill in a web form to request this and system software would validate the request and put it into effect.
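
The automated step might look something like the fragment below. The endpoint and JSON fields are purely illustrative, since real controllers expose their own REST or RPC interfaces with different schemas.

```python
# How an automated change might reach the controller. The endpoint and JSON
# fields below are invented for illustration only.
import json
import urllib.request

def build_bandwidth_request(tenant, mbps,
                            controller="https://controller.example.com"):
    """Assemble an API call asking the controller to raise a tenant's
    bandwidth allocation; nothing is sent until urlopen() is called."""
    payload = json.dumps({"tenant": tenant, "bandwidth_mbps": mbps}).encode()
    return urllib.request.Request(
        f"{controller}/api/v1/bandwidth",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_bandwidth_request("retailer-42", 2000)
print(req.full_url, req.data)
# A monitoring script could fire this when traffic crosses a threshold:
# urllib.request.urlopen(req)   # left commented; the endpoint is fictional
```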

Open secret

There have to be standards to do this, otherwise it won't be open.

One approach to overcoming this challenge is the OpenFlow protocol. The idea is that networks should be software-defined and programmable to improve traffic flows and facilitate the introduction of new networking features.

Network devices have traffic flow tables that define how incoming data packets are dealt with and where they are sent. With OpenFlow, the flow tables are modified by messages sent from a secure and remote server.
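
The match-action idea is easy to sketch. The fragment below uses a deliberately simplified flow table (the field names are a tiny invented subset, not the real OpenFlow match structure) to show how a controller changes behaviour by pushing entries rather than by reflashing firmware.

```python
# The match-action idea behind a flow table, heavily simplified: each entry
# pairs packet fields with an action, and the controller changes behaviour by
# sending new entries. Field names are an invented subset for illustration.

flow_table = [
    {"match": {"dst_mac": "00:aa:bb:cc:dd:ee"}, "action": "output:port2"},
    {"match": {},                               "action": "send-to-controller"},
]

def handle_packet(packet):
    """Return the action of the first entry whose match fields all agree."""
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]

print(handle_packet({"dst_mac": "00:aa:bb:cc:dd:ee"}))  # output:port2
print(handle_packet({"dst_mac": "00:11:22:33:44:55"}))  # send-to-controller

# A controller "message" is, in effect, a new entry pushed into the table:
flow_table.insert(0, {"match": {"dst_mac": "00:11:22:33:44:55"},
                      "action": "output:port7"})
print(handle_packet({"dst_mac": "00:11:22:33:44:55"}))  # output:port7
```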

Vendors such as Brocade, Cisco, HP, IBM and NEC are supportive of OpenFlow and its ideas, although developing it into an open standard will take a lot of work.

That work, however, will be worthwhile and much appreciated by data centre network managers. ®
