
16 terabytes of RAM should be enough for anyone. Wait. What?

AWS is cooking virty servers with 16TB but thinks we'll also need clusters packing 34TB

Amazon Web Services is working on new instance types that will offer either eight or 16 terabytes of random access memory.

You read that right: 8TB or 16TB of RAM, with the target workload being in-memory computing and especially SAP HANA. The cloud colossus is also working on HANA clusters that span 17 nodes and pack a combined 34TB of memory.

There's also a new instance type in the works, dubbed “x1e.32xlarge”, packing 4TB of memory and offering 128 vCPUs on four 2.3 GHz Xeon E7 8880 v3 CPUs. But this instance type will only be available inside a Virtual Private Cloud, an arrangement under which users can apply their preferred IP addresses to instances in their rented clouds and impose their preferred security policies on AWS.
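For the curious, here's a minimal sketch of what launching such an instance into a VPC subnet might look like using the boto3 Python SDK. The AMI, subnet, and security group IDs are hypothetical placeholders; only the x1e.32xlarge instance type comes from AWS's announcement.

import boto3

# Minimal sketch: launch an x1e.32xlarge into an existing VPC subnet.
# All resource IDs below are placeholders, not real AWS resources.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # placeholder: e.g. a SAP HANA-certified AMI
    InstanceType="x1e.32xlarge",       # 4TB RAM, 128 vCPUs
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-xxxxxxxx",        # placeholder: ties the instance to your VPC
    SecurityGroupIds=["sg-xxxxxxxx"],  # placeholder: your own security policy
    PrivateIpAddress="10.0.0.10",      # your preferred IP, as the VPC model allows
)

print(response["Instances"][0]["InstanceId"])

The SubnetId and PrivateIpAddress parameters are what give VPC instances the on-premises feel the article describes: you pick the addressing and the firewall rules, not AWS.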

AWS isn't saying when the 8TB and 16TB instances will arrive or what they'll be named. But what is clear is that RAM remains pricey stuff, and plenty of organisations would blanch at the cost of even 4TB of it, never mind 16TB or 34TB. That AWS is cooking up server instances and clusters with these levels of memory suggests there's lots of demand for HANA at scale, but perhaps less capacity to acquire and operate the kit required to run it.

By requiring the 4TB-capable instances to run inside a Virtual Private Cloud, where users' own security rules apply, AWS is also hinting that there's demand for hybrid in-memory rigs.

Whatever the reason, this is a bit of a What A Time To Be Alive moment, because, if nothing else, being able to hire 16TB servers by the hour is remarkable in and of itself. ®
