The 'data driven enterprise' is actually just the enterprise

Making the qualitative quantitative

Many, many moons ago – OK, more than 25 years ago – I studied computing science at university. Yet there are still many instances in my modern life where I find myself thinking back to something I was taught in the 1980s. One recent example was a flurry of conversations and articles about the “data driven” enterprise.

Back in the day I was taught about data structures in computing – ways to store and represent data so it can be accessed and manipulated effectively and quickly – and was told that whenever you develop any kind of computer system, you start with the data structures.

This made a lot of sense to me: in any business application the data is core, because without it there’s no point having the application. If you focus your initial efforts on getting the data structures right, the rest of the program will drop out pretty easily – not least because there are so many algorithms for data manipulation that you generally don’t need to invent one yourself; you just copy one from the book.
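To make that concrete, here’s a minimal Python sketch (the booking record and the figures are my own invention, nothing from that 1980s course): pin down the core data structure first, and the manipulation mostly falls out of the standard toolbox.

```python
from dataclasses import dataclass
from datetime import date

# Step one: pin down the data structure for a (hypothetical) holiday-booking app.
@dataclass
class Booking:
    customer: str
    destination: str
    departure: date
    price_gbp: float

bookings = [
    Booking("A. Smith", "Lisbon", date(2025, 6, 1), 450.0),
    Booking("B. Jones", "Crete", date(2025, 5, 20), 780.0),
    Booking("C. Patel", "Lisbon", date(2025, 7, 3), 510.0),
]

# Step two: the manipulation comes straight from the standard toolbox.
by_departure = sorted(bookings, key=lambda b: b.departure)                        # textbook sort
lisbon_revenue = sum(b.price_gbp for b in bookings if b.destination == "Lisbon")  # simple aggregation
print(by_departure[0].customer, lisbon_revenue)
```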

Hence the concept of the data driven enterprise puzzled me intensely: surely the enterprise wouldn’t exist without the data, so what enterprise wouldn’t be data driven?

Weirdly, the answer is: rather a lot of them.

What does it mean?

Ask Google’s search engine for a definition of “data driven”, and it’ll tell you: “data-driven means that progress in an activity is compelled by data, rather than by intuition or personal experience”. Hmm … reading this, I started to see why the average enterprise isn’t data driven.

As in so many walks of life, the problem is with people. When you’re recruiting staff, what do you look for? Qualifications, of course. And experience: when faced with two equally qualified people, you’ll tend to pick the more experienced one. And therein lies the problem: experience, combined with the knowledge it brings, can be a dangerous thing. It comes with preconceptions, and the latent opinions of someone who has worked – apparently successfully – in their field for some time.

What does success look like?

We measure success all the time – or at least we say we do. Salespeople’s success is measured against their sales targets, for example, while those of us without financial targets (my day job is as an information security specialist) endeavour to set “SMART” goals to run at, in which the M stands for “measurable” (check out Wikipedia’s SMART criteria page if you’ve not come across the concept).

But where do those goals come from? In many cases a sales target is based on the previous year’s target, factored up for predicted growth. How much science goes into that growth prediction? It’s all very well measuring performance, but the relevance of the measure is limited if the target isn’t logical or scientific.

Take "project success". How do you measure the success of projects you carry out within your organisation? On time and on budget? Yeah, that’s all very well but how did you choose the deadline and what was the science behind the budget? It may be on budget, but did it give value or was the budget actually full of padding?

The only way to know is to have a scientific process of understanding the end-to-end value of whatever you’re doing, and to measure the lifetime cost – quantitatively – of the project. Unless you generate and use data both before and after the delivery of the project, you cannot possibly quantify its success. Or for that matter its failure: the bitterness of failure is sweetened by the ability to learn from it.
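As a toy illustration of that before-and-after measurement (every figure below is invented), you could compare the business case at approval time with the numbers actually delivered; being “on budget” tells you nothing by itself about whether the promised value showed up.

```python
# Hypothetical before/after figures for one project (every number is invented).
forecast = {"cost_gbp": 200_000, "annual_benefit_gbp": 90_000, "lifetime_years": 3}
actual   = {"cost_gbp": 198_000, "annual_benefit_gbp": 55_000, "lifetime_years": 3}

def lifetime_value(figures: dict) -> float:
    """Net value over the project's lifetime: total benefit minus total cost."""
    return figures["annual_benefit_gbp"] * figures["lifetime_years"] - figures["cost_gbp"]

print("On budget?", actual["cost_gbp"] <= forecast["cost_gbp"])  # True
print(f"Forecast net value: £{lifetime_value(forecast):,.0f}")   # £70,000
print(f"Actual net value:   £{lifetime_value(actual):,.0f}")     # £-33,000
# "On budget" holds, yet the project delivered far less value than it promised.
```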

Having it, and generating it

It is dreadfully common to see organisations making decisions based on experience and past knowledge because they don’t have the data they need to do properly scientific, quantitative analyses.

But why not turn the problem on its head: instead of producing limited results from the data they have, why don’t they look for more data? Why don’t businesses work like engineers, or architects, or software developers, and define the results they want, then see what can be done to generate the necessary data?

Let’s take a real example. Remember the (now rather clichéd) quote from American retailer John Wanamaker? “Half the money I spend on advertising is wasted”, he is said to have grumbled, “the trouble is I don't know which half”.

Well, I once consulted for a travel company that wanted to avoid being in the same situation. It advertised in dozens of different publications, and when potential customers phoned to discuss a holiday they were asked: “Where did you see our advert?” The reservationist then selected the stated publication on the PC screen.

Sensible idea: if you know which ads are driving the customers to you, you can adjust your ad programme accordingly. But the owner of the company was a great believer in hard facts, and decided he wanted a technological way to prove where the leads were coming from.

So he rented 1,000 phone numbers and spent over £200k on a new phone system chosen for its cool CTI (computer telephony integration) capability. By printing a different phone number on each advertisement he could be sure of the source of each lead, because the reservation system could read the “called number” entry from the ISDN line and log the call against it in the database.
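I obviously can’t reproduce the firm’s actual system here, but the mechanism boils down to a lookup from the dialled number to the advertisement that carried it. Something like this rough Python sketch, with made-up numbers and names:

```python
# Hypothetical mapping of inbound (DDI) numbers to the adverts that carry them.
NUMBER_TO_ADVERT = {
    "02075550101": "Publication A",
    "02075550102": "Publication B",
    "02075550103": "Publication C",
    # ...one rented number per advertisement
}

call_log = []  # stand-in for the reservation database table

def log_inbound_call(called_number: str, caller_id: str) -> str:
    """Attribute a call to an advert based on the number the caller dialled."""
    advert = NUMBER_TO_ADVERT.get(called_number, "unknown")
    call_log.append({"advert": advert, "called": called_number, "caller": caller_id})
    return advert

# The CTI layer hands over the "called number" it reads from the ISDN signalling:
print(log_inbound_call("02075550102", "01632960123"))  # -> Publication B
```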

Did it make a difference? Oh yes. How do I know? Data. While the new system was quietly logging sources in the background, the company did not change the process of the reservationists asking: “Where did you see our advert?” – so we were able to compare the two sets of results. And you know what? On a good day the old approach was 28 per cent incorrect; on a bad day it was 43 per cent wrong. And this wasn’t necessarily because people were bad at their jobs. Although we can’t pin down the factors for certain, it’s likely that some customers would simply mis-remember the source. And I’ll put a fiver on the reservationists sometimes unknowingly picking the wrong source from a list of 150 similar-looking choices in a busy call centre.
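The comparison itself is simple once both records exist: pair each call’s self-reported source with the number-derived one and count the disagreements. The figures in this sketch are invented; only the method matters.

```python
# Each record pairs what the caller said with what the dialled number proved.
# All figures below are invented purely to show the method.
calls = [
    {"stated": "Publication A", "logged": "Publication A"},
    {"stated": "Publication B", "logged": "Publication C"},
    {"stated": "Publication C", "logged": "Publication C"},
    {"stated": "Publication A", "logged": "Publication B"},
]

mismatches = sum(1 for c in calls if c["stated"] != c["logged"])
print(f"Self-reported source wrong on {mismatches / len(calls):.0%} of calls")  # 50% in this toy sample
```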

Making the qualitative quantitative

One of the big problems with dealing with customers in particular is that their opinions change with the wind. Ask a customer how they rate your service or product and the answer will differ depending on how happy/grumpy/ill/hung over they happen to be at the time.

So see if there are opportunities to quantify customer opinion or preference. Sending out 10,000 promotional leaflets? Why not try two different designs, with two different sets of contact details and/or promo codes, send each design to 5,000 people, and measure the response?
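A split test like that boils down to counting responses per promo code and checking the difference is bigger than chance alone would explain. Here’s a rough sketch (response counts invented, and a two-proportion z-test is just one reasonable way to do the check):

```python
from math import sqrt

# Invented response counts: 5,000 leaflets per design, identified by promo code.
design_a = {"sent": 5000, "responses": 180}   # hypothetical promo code "SUMMER-A"
design_b = {"sent": 5000, "responses": 235}   # hypothetical promo code "SUMMER-B"

rate_a = design_a["responses"] / design_a["sent"]
rate_b = design_b["responses"] / design_b["sent"]

# Two-proportion z-test: is the gap bigger than sampling noise would explain?
pooled = (design_a["responses"] + design_b["responses"]) / (design_a["sent"] + design_b["sent"])
std_err = sqrt(pooled * (1 - pooled) * (1 / design_a["sent"] + 1 / design_b["sent"]))
z = (rate_b - rate_a) / std_err

print(f"Design A: {rate_a:.1%}  Design B: {rate_b:.1%}  z = {z:.2f}")
# |z| above roughly 1.96 suggests the better design really is better, not luck.
```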

When you think about it, it’s surprising how often you can find a way to measure – at least in part – something that you thought could only be understood qualitatively or anecdotally. (Incidentally: if you’re interested in the power of data, science, experimentation and learning from failure I highly recommend Matthew Syed’s excellent Black Box Thinking.)

So … the data driven enterprise?

The more I think about it, the more I wonder how there can be such a thing as an enterprise that’s not data driven – except perhaps if it’s a monopoly or does something so unique that the flow of customers is guaranteed. The definition I quoted earlier says that without data you are relying on intuition and personal experience.

Now of course these are both great things to have, along with qualities like creativity, motivation, vision and the like. But unless you can measure something, you don’t know how successful it’s been. Yes, you can set a target and declare “success” if you’ve hit it, but how do you know it was the right target in the first place? The data is the key.

So every enterprise should be driven by data – driven by the facts. The data driven enterprise should just, then, be: “the enterprise”. ®
