
Stop the IoT revolution! We need to figure out packet sizes first

Researchers test 802.15.4 and find we know nuh-think! about large scale sensor network ops

By Richard Chirgwin, 25 Nov 2014

While the world gets excited about Internet-of-stuff saving people from the exhaustion that follows putting down a smartphone or using a remote control to adjust the thermostat, there's a bunch of research still needed to get more serious applications like sensor networks bedded down.

It's not just that the world is choosing between different standards at the physical and MAC layers, but also that the performance and behaviour of sensor networks is under-explored and ill-understood.

A recent contribution to the state of play comes from a collaboration between researchers at the University of Duisburg-Essen in Germany and the Norwegian University of Science and Technology.

In a paper at arXiv, the group examines the behaviour of 802.15.4 – the physical- and MAC-layer standard underpinning popular Internet of Things specs like ZigBee and other low-power radio technologies – under a vast number of different configurations.

Promising that their raw data will be made public, Duisburg-Essen's Songwei Fu, Chia-Yen Shih and Pedro Marron and NTNU's Yan Zhang and Yuming Jiang have collected throughput, packet loss and other stats covering 200 million packets under nearly 50,000 different parameter configurations in their 802.15.4 test.

Using the TinyOS 802.15.4 stack on TI radios, the researchers used a very simple setup: pairs of nodes in an indoor office environment, communicating over a range of distances between 10 and 35 metres, and, just to make things a little more messy, in the presence of several WiFi access points.

At the physical layer, the researchers varied the transmission power as well as the distance; at the MAC layer, they fiddled with the queue size, maximum retries, and retry delay; and at the application layer, they adjusted packet payload size and packet inter-arrival time.
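For a sense of how those nearly 50,000 configurations come about, here's a minimal sketch of that kind of cross-layer parameter sweep. The parameter names and values below are illustrative assumptions, not the study's actual grid:

```python
from itertools import product

# Hypothetical parameter grid, loosely mirroring the layers the
# researchers varied; the specific values here are made up.
grid = {
    "tx_power_dbm": [-25, -15, -5, 0],   # physical layer
    "distance_m": [10, 20, 35],          # physical layer
    "queue_size": [1, 10, 30],           # MAC layer
    "max_retries": [0, 3, 7],            # MAC layer
    "retry_delay_ms": [10, 30],          # MAC layer
    "payload_bytes": [20, 60, 110],      # application layer
    "inter_arrival_ms": [50, 200],       # application layer
}

def configurations(grid):
    """Yield every combination of parameter settings as a dict."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

# Even this toy grid multiplies out quickly:
total = 1
for values in grid.values():
    total *= len(values)
# 4 * 3 * 3 * 3 * 2 * 3 * 2 = 1,296 configurations here; the study's
# real sweep ran to nearly 50,000.
```

The combinatorial explosion is the point: each extra parameter value multiplies the number of test runs, which is why a systematic public dataset at this scale is useful.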

Their overarching conclusion (there are too many individual inferences in the paper to be reproduced here) is that in optimising the behaviour of a network, the interactions up and down the stack have to be taken as a whole – which is probably no surprise:

“Larger queue size and higher number of allowed retransmissions can reduce PLR and increase goodput. However, they will result in an increased delay. If the link is in the transitional zone, increasing transmission power such that the SNR moves to the low-loss zone improves all QoS metrics and even the energy consumption. If the link can only stay in the transitional zone, the QoS metrics and energy efficiency are more sensitive to the stack parameter configuration. In particular, a moderate packet payload size may better balance the trade-offs between the performance metrics.”
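The retransmission trade-off the authors describe can be illustrated with a toy model (our sketch, not the paper's analysis): assume each transmission attempt fails independently with probability p, so a packet is lost only if the original attempt and every allowed retry all fail, while the mean number of transmissions per packet – a rough proxy for delay and energy – grows with the retry budget:

```python
def plr(p_attempt, max_retries):
    """Packet loss rate under independent per-attempt loss:
    the packet is lost only if all (max_retries + 1) attempts fail."""
    return p_attempt ** (max_retries + 1)

def expected_attempts(p_attempt, max_retries):
    """Mean transmissions per packet (truncated geometric):
    sum_{k=0}^{r} p^k = (1 - p^(r+1)) / (1 - p)."""
    return (1 - p_attempt ** (max_retries + 1)) / (1 - p_attempt)

# With a 30% per-attempt loss rate, allowing 7 retries slashes the
# loss rate from 0.3 to 0.3**8, but each packet now costs more
# transmissions on average, i.e. more delay and energy.
lossy_link = 0.3
plr_no_retry = plr(lossy_link, 0)
plr_seven = plr(lossy_link, 7)
cost_no_retry = expected_attempts(lossy_link, 0)
cost_seven = expected_attempts(lossy_link, 7)
```

Real 802.15.4 links don't lose packets independently, of course, which is one reason measurement campaigns like this one beat back-of-envelope models.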

Once the data is published, Vulture South expects a whole heap of other tweaking possibilities to pop up. ®
