
Is Britain really worse at 4G than Peru?

Crowd or non-crowd - network industry rages over the numbers

Special Report Which? magazine's claim that the UK has "worse 4G than Peru", widely reported by the national media this week, has reopened a highly charged industry debate about the reliability of network data collection.

The debate can be crudely summed up as "crowd vs non-crowd", but it actually goes deeper: does enough "Big Data" from a large, unknown number of testers produce better results than more rigorous, representative sampling?

Which? used data collected by VC-backed OpenSignal, a kind of Wikipedia of network performance monitors, which crowdsources its data collection. Anyone can download the OpenSignal app and become a tester. The app collects samples taken every 15 minutes when the phone’s display is on, a background test that’s run every five days, and also manual, user-initiated speed tests. OpenSignal doesn’t know who is using the app – it’s all about the big picture.
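OpenSignal hasn't published its record format here, but a minimal sketch of passive, opportunistic collection of this kind might look like the following (Python, with invented field names standing in for whatever the real app records):

```python
import time
from dataclasses import dataclass

# Hypothetical record for one crowdsourced reading. Field names are
# illustrative, not OpenSignal's actual schema.
@dataclass
class SignalSample:
    timestamp: float    # when the reading was taken
    lat: float          # where the device was
    lon: float
    operator: str       # carrier the SIM is registered to
    network_type: str   # e.g. "LTE", "3G", "no_signal"
    rsrp_dbm: float     # measured signal strength

def passive_sampling_loop(read_signal, upload, interval_s=15 * 60):
    """Take a reading every interval_s seconds (15 minutes here), mirroring
    the screen-on cadence described above. read_signal and upload stand in
    for platform APIs and a batched uploader in a real app."""
    while True:
        upload(read_signal())
        time.sleep(interval_s)
```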

This approach is called opportunistic sampling, and it contrasts with expensive "drive-test" monitoring, which rigorously records the performance at known locations on known networks. The challenge for drive test firms is making sure their data is representative of the public’s experience. The challenge for opportunistic crowd sampling methods is ensuring the “crowd” is actually representative of the wider public.

Network performance firm Global Wireless Solutions is firmly in the drive-test camp, as is RootMetrics, now owned by analytics giant IHS Markit. The latter, interestingly, first made its name using crowd data, but has now switched to using drive tests.

One hypothetical example we put to all three firms above highlights the difference in their approaches.

Imagine that there’s one LTE base station in Antarctica, on a remote research base. The staff never leave the range of that base. Using crowd data, OpenSignal could conclude that "Antarctica has 100 per cent LTE coverage". An absurdity?

Not at all, OpenSignal's CTO, Dr William Webb, explained to us: that's not a bug of crowdsourced data, it's a feature.

"It's a great example. The data tells you there is 100 per cent LTE coverage for where the people actually are. If they stray out of range, then it falls below 100 per cent," he said.

“That’s because it’s experiential data, not geographical data. Geographically, using that example, Antarctica has 1 per cent coverage and the UK has 98 per cent. It’s not sensible to compare the two. Experiential coverage really tells us what people do when they go about their daily lives,” says Webb.

“If one country has 80 per cent experiential coverage, and another 60 per cent, it’s fair to say they have a better experience in that first country.”
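Webb's arithmetic is simple enough to write down: experiential coverage is just the share of crowd samples (or user-time) recorded while an LTE signal was present. A minimal sketch, reusing the hypothetical SignalSample record above:

```python
def experiential_lte_coverage(samples):
    """Fraction of crowd samples taken while the device had an LTE signal.

    samples: iterable of records with a network_type field, as in the
    hypothetical SignalSample above. This measures coverage where users
    actually are, not coverage of the landmass.
    """
    samples = list(samples)
    if not samples:
        return 0.0
    on_lte = sum(1 for s in samples if s.network_type == "LTE")
    return on_lte / len(samples)

# In the Antarctica example every sample is taken inside the one cell, so
# this returns 1.0 (100 per cent) even though geographic coverage is tiny.
```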

However, rival industry performance watchers point to a series of problems with the crowd approach.

“If you’re saying [from the hypothetical cited] Antarctica has 100 per cent coverage, you just don’t know if it’s true. That’s where the crowd falls down,” says RootMetrics European general manager, Scott Stonham.

“The idea of using crowd-generated data, as OpenSignal and Which? have done, might seem appealing to the public and the numbers appear impressive, but from our experience it doesn’t give a true picture of mobile coverage and performance in the UK,” Stonham said.

“The key to understanding crowd is that the data generated from it is influenced by the incentives and motives of the person using it. Those that suffer with poor performance are inherently more likely to download and use a crowd-based app to understand their issues, than those that enjoy better performance,” he explains.

“You have a bathtub curve, lots at either end and less in between. You have to be careful what you do with that.”
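Stonham is describing a textbook self-selection problem: if the chance of installing the app rises as a user's coverage gets worse, the crowd's average drifts below the population's. A toy simulation, with all numbers invented purely to show the mechanism:

```python
import random

def simulate_selection_bias(n_users=100_000, seed=1):
    """Toy model: each user has a true LTE availability between 0 and 1, and
    users with worse coverage are more likely to install the measurement app.
    The crowd average then understates the population's real experience."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(n_users)]  # true per-user coverage

    crowd = []
    for coverage in population:
        p_install = 0.05 + 0.4 * (1 - coverage)  # frustrated users opt in more
        if rng.random() < p_install:
            crowd.append(coverage)

    true_avg = sum(population) / len(population)
    crowd_avg = sum(crowd) / len(crowd)
    return true_avg, crowd_avg

# Roughly 0.50 vs 0.37 with these made-up parameters: the crowd looks
# noticeably worse off than the population it is supposed to represent.
print(simulate_selection_bias())
```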

The answer, he says, is to layer scientific data on top of the crowd data, test more cities and gather more data; the company issued a blog post on the subject this week.

“People say it’s all about numbers, and that doesn’t matter. But even with 100 million [samples] from across all the UK spread nicely, you don’t know where it’s from. Is it from one operator, or two or three or four? That’s where the experiential defence falls down. The people are not in all the places all the time,” says Stonham.

Global Wireless Solutions, which uses controlled field testers equipped with rucksacks packed with mobiles from all the operators, agrees:

“Cataloging the percentage of 4G coverage and throughputs delivered to a device based on random data collected in an uncontrolled manner represents a limited sampling of the many metrics that could be included when measuring performance."

“Equally as important, any measurement used in this type of “country to country” comparison would need to include weighting factors to account for those external circumstances and influences that would otherwise make such a comparison inequitable,” says the firm.
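GWS doesn't spell out which weighting factors it has in mind; one common correction is post-stratification, where crowd results are re-weighted so each operator counts in proportion to its real subscriber share rather than its share of app installs. A minimal sketch under that assumption:

```python
def weighted_lte_coverage(samples, subscriber_share):
    """Re-weight per-operator crowd results by real-world subscriber share.

    samples: records with operator and network_type fields, as in the
    hypothetical SignalSample above.
    subscriber_share: dict mapping operator -> share of the country's
    subscribers (assumed to come from regulator data; values sum to 1).
    """
    by_operator = {}
    for s in samples:
        by_operator.setdefault(s.operator, []).append(s)

    weighted = 0.0
    for operator, share in subscriber_share.items():
        op_samples = by_operator.get(operator, [])
        if not op_samples:
            continue  # no crowd data for this operator; its share goes unmeasured
        on_lte = sum(1 for s in op_samples if s.network_type == "LTE")
        weighted += share * (on_lte / len(op_samples))
    return weighted
```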

Stonham said he found the international results interesting where OpenSignal's reports appeared to be better than even the network operators' own propagation maps, "which tend to be optimistic".
