
The future of the data centre is within

Location, location, location

Promo Future-proofing your data centre is no longer just down to a choice of the right servers and storage; it’s now all about connectivity, location and the neighbours.

The advantage – and the problem – with technology is that it’s always improving. Your latest server will always be replaced by a newer, faster model, and the next network will always be upgraded to a faster system.

When you are specifying small projects, it’s a problem that you can live with; servers can be tweaked, with more memory and larger drives added, or simply replaced. However, when you’re specifying a large project such as a data centre that is going to be the centrepiece of your business, then future trends become much more important.

So how do you protect your data centre strategy from the future? Do you build the data centre yourself, do you go for co-location or do you take a leap of faith and create a virtual data centre in the cloud, or a half-way house hybrid cloud solution?

The home-grown data centre is a viable option – if you have deep pockets. Cloud is perfect if you need to scale quickly and you have the skills to manage the new technology, but for most applications co-location is probably the best fit. Simply choosing a co-lo provider will not guarantee that your infrastructure is future-proofed, though. Network connectivity, servers, storage and location are among the other factors to consider.

The traditional solution is to buy the latest and fastest technology, which is fine if you have bottomless pockets, but the days of blank cheques for IT infrastructure projects are long gone for most businesses. And when it comes to future-proofing your data centre, simply selecting the fastest hardware isn’t going to solve the problem. The applications you are running and the location of the data centre are more important considerations.

The type of application you plan to run in the data centre is key to your choice of location, features and services, particularly as applications are no longer designed as monolithic one-application-per-box deployments but may now be built from a combination of virtualised infrastructure, micro-services and containers.

The old one-app-per-box class of application took in data, did some compute, accessed an internal database and spat out an answer. The new class of application – built using technologies like virtualisation, containers and micro-services and accessing multiple external services – presents a radically different workflow to the standard application.

The network effect

From an enterprise architecture perspective, the important consideration is the way data moves. We are well beyond the days of the north-south web services infrastructure, where data would come in from the network to a front end that would call an application service, fetch the data, and then send the result back up the pipeline.

Current applications talk to each other; endpoints need to collaborate and work together; and a much higher degree of virtualisation is taking place. This combination causes network traffic to spill out in all directions at the same time, creating more north-south traffic and a huge rise in east-west traffic.
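To make the contrast concrete, here is a minimal sketch of that fan-out pattern, assuming a hypothetical order-handling service: a single inbound (north-south) request triggers several internal (east-west) calls to other services before a response can be assembled.

```python
# Minimal sketch of the east-west fan-out pattern described above.
# The service endpoints are hypothetical; in a real deployment they would be
# internal DNS names resolved inside the data centre.
import requests

INTERNAL_SERVICES = {
    "inventory": "http://inventory.internal.example/api/stock",
    "pricing":   "http://pricing.internal.example/api/quote",
    "fraud":     "http://fraud.internal.example/api/score",
}

def handle_order(order_id: str) -> dict:
    """One inbound (north-south) request triggers several internal
    (east-west) calls before a response can be assembled."""
    results = {}
    for name, url in INTERNAL_SERVICES.items():
        # Each call is east-west traffic that never leaves the data centre,
        # and each must complete within a tight millisecond budget.
        resp = requests.get(url, params={"order_id": order_id}, timeout=0.05)
        results[name] = resp.json()
    return results
```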

In the past, the timescales allowed for such an application to complete were measured in seconds. The modern application needs to do it all in milliseconds – faster still if it’s an application in the financial sector where millions of trades are executed per second – and that includes the contacts, negotiations and data delivery with third-party systems.

This new workflow is expected to drive a substantial rise in application traffic through the data centre over the next five years. Cisco’s sixth annual Global Cloud Index report (2015-2020) shows data centre traffic will more than triple from 2015’s 4.7 zettabytes to 15.3 zettabytes by 2020, a compound annual growth rate of over 27 per cent. A substantial part of this additional traffic will come from within the data centre (26.8 per cent growth – from 3.6 zettabytes to 11.7 zettabytes) and from data centre-to-data centre traffic (31.9 per cent growth – from 0.3 zettabytes to 1.4 zettabytes).
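As a rough sanity check, those compound annual growth rates can be recomputed from the start and end volumes quoted above; the small differences from Cisco’s published rates come down to the rounding of the zettabyte figures in the report.

```python
# Recompute the compound annual growth rate (CAGR) from the cited totals.
def cagr(start: float, end: float, years: int) -> float:
    """CAGR = (end / start) ** (1 / years) - 1"""
    return (end / start) ** (1 / years) - 1

# Figures quoted from the Cisco Global Cloud Index 2015-2020 (zettabytes/year).
print(f"Total DC traffic:  {cagr(4.7, 15.3, 5):.1%}")   # ~26.6% ("over 27 per cent" in the report)
print(f"Within-DC traffic: {cagr(3.6, 11.7, 5):.1%}")   # ~26.6% (report: 26.8%)
print(f"DC-to-DC traffic:  {cagr(0.3, 1.4, 5):.1%}")    # ~36% on rounded inputs (report: 31.9%)
```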

Cisco believes that data centre-to-data centre traffic will account for almost a tenth of total data centre traffic, while internal data centre traffic (for example, moving data from a development environment to a production environment within a data centre, or writing data to a storage array) will account for more than three quarters (77 per cent) of the total.

Overall, east-west traffic (traffic within the data centre and traffic between data centres) will represent 86 per cent of total data centre traffic by 2020, while north-south traffic (traffic exiting the data centre to the internet or WAN) will account for only 14 per cent.
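Those percentages follow directly from the volume figures cited above; here is a minimal check using the rounded zettabyte numbers from the report.

```python
# Shares of total data centre traffic implied by the 2020 volume figures (ZB/year).
total_2020 = 15.3
within_dc  = 11.7     # traffic that stays inside a data centre
dc_to_dc   = 1.4      # traffic between data centres

east_west   = within_dc + dc_to_dc
north_south = total_2020 - east_west

print(f"within data centre: {within_dc / total_2020:.0%}")    # ~76-77%
print(f"DC to DC:           {dc_to_dc / total_2020:.0%}")     # ~9%, "almost a tenth"
print(f"east-west combined: {east_west / total_2020:.0%}")    # ~86%
print(f"north-south:        {north_south / total_2020:.0%}")  # ~14%
```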

Another key part of the future-proofing choice is to look at the systems you expect to link into. Ideally, you need a data centre that houses the businesses your business is partnering with. The shorter the distance between the third-party organisation and you, the better.
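The reason distance matters is propagation delay: a signal in optical fibre covers roughly 200km per millisecond, so every extra 100km of separation adds about a millisecond to the round trip before any processing has happened. A rough, back-of-the-envelope sketch, with illustrative rather than measured distances:

```python
# Rough estimate of round-trip propagation delay over fibre.
# Assumes ~200,000 km/s signal speed in fibre and straight-line distances;
# real routes are longer, and switching and queuing add further delay.
FIBRE_SPEED_KM_PER_MS = 200.0   # ~200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

for label, km in [("same building", 0.1),
                  ("across a metro area", 30),
                  ("suburban DC to city partner", 80),
                  ("another country", 800)]:
    print(f"{label:28s} ~{round_trip_ms(km):.2f} ms round trip")
```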

If that connection is in the same building, then so much the better. Locating your business in a data centre that offers private connection points such as Microsoft Azure ExpressRoute and AWS Direct Connect, which link to the hyper-scale cloud platforms like Azure and AWS, allows your business to bypass the public internet. That gives you fast, dedicated and secure access to a multi-cloud environment, with increased availability and reduced latency, enabling you to reduce infrastructure costs and, at some point in the future, to select the right cloud for the right workload.
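As an illustration of how such a private link is requested from the cloud side, here is a minimal sketch using AWS Direct Connect via boto3. The connection name is hypothetical, the region is illustrative, and the location code must be one of the colocation facilities AWS actually lists.

```python
# Minimal sketch: request a dedicated AWS Direct Connect port at a
# colocation facility, so AWS-bound traffic bypasses the public internet.
# Assumes AWS credentials are already configured.
import boto3

dx = boto3.client("directconnect", region_name="eu-west-2")

# List the colocation facilities where Direct Connect ports are available.
for loc in dx.describe_locations()["locations"]:
    print(loc["locationCode"], "-", loc["locationName"])

# Request a 1 Gbps dedicated connection at a chosen facility.
connection = dx.create_connection(
    location="EXAMPLE1",              # pick a real code from the listing above
    bandwidth="1Gbps",
    connectionName="colo-to-aws-uplink",
)
print(connection["connectionState"])  # e.g. 'requested' while AWS processes the order
```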

As well as this change in the structure of the applications and the need for forward thinking on connectivity, there’s also the source of the data used in these applications to think of. We are on the cusp of a huge increase in the amount of data traffic into and out of the data centre.

The Cisco Global Cloud Index report points to massive growth in data traffic passing through traditional enterprise data centres, estimating it will hit 1.3 zettabytes per year by 2020, up from 827 exabytes per year in 2015, with total data centre traffic (cloud and traditional data centres combined) reaching 15.3 zettabytes.

This new traffic will come from a combination of areas: data analytics applications, big data applications and the Internet of Things (IoT). Current estimates for the number of IoT devices globally hover around the 20-50 billion mark by 2020. The tech analyst firm Quocirca believes those figures are far too low, and that the real number is probably an order of magnitude larger.

With many of those devices being used and deployed in metropolitan areas, it makes sense to have your data centre near to the new sources of data rather than miles outside of the metropolitan area.

The key is to have your data centre close to the source of the data. Once the Internet of Things becomes an everyday utility, we will have to consider where those devices (eg utility meters, autonomous vehicles, connected household appliances, connected business appliances and connected machinery) actually are – and the largest density of those devices will be in urban areas.

Once you know where your data is coming from, you have a number of choices, as Bob Landstrom, director of product management at Interxion, explains: “Businesses should look at what’s consuming the data, and where that consumption happens. There are two choices: you can either have a data centre in the suburbs and that connects into multiple edge data centres that collect and distribute the data and connect into the core suburban data centre. Or you have your core data centre close to the population and the data devices.”

He argues in favour of a metro data centre solution that can serve as a core data centre and as an edge facility. “To serve a congested area like London, it’s much better for your core data centre to be in the same location that you have the edge data centre. It makes your network connectivity simpler, and simplifies the data traffic flows for your applications.”

Lastly, while looking at how your business future-proofs the data centre, you should also look at the way you manage and support your data centre and your on-premises systems. Future application solutions based on containers and micro-services, delivered in the cloud and utilising software-defined networking (SDN) and network function virtualisation (NFV), are complex and difficult to support.

Support staff and developers who understand these systems are few and far between, and therefore expensive. But you can maintain your current applications with your current staff by choosing a data centre that offers a fully managed co-location solution with a local footprint. This means you can adopt future-looking technologies without the problems and expense of recruiting support staff, or the cost of building the advanced network infrastructure necessary to keep up with cutting-edge technologies.

According to IDG Connect research, a lack of resources and expertise is the number one obstacle to cloud adoption, ahead even of security concerns.

This shortage of skilled staff has led to a huge uptake in hybrid IT approaches, with 49 per cent of organisations now depending on a mix of their own infrastructure and that of third-parties, such as cloud or colocation providers. By 2018, a fifth of applications will be deployed at cloud service providers, while one-in-seven will be deployed at a colocation provider.

Businesses are also moving to colocation because they want to spend their budget on the things they do best rather than invest in mission-critical facilities support staff, according to Landstrom. “We are seeing that the data centre skills shortage is one of the things driving enterprises to use co-location, and to abandon their own efforts to operate data centres.”

The increasing requirement for “connectiveness” with multiple systems inside the data centre, and from data centre to data centre, means the key to choosing a good future-proof data centre is no longer down to the hardware you put into it; it increasingly depends on your choice of data centre and where that data centre sits. In short, your data centre future-proofing depends on location, location, location.
