
Cognitive computing: What can and can’t we do, and should lipreading be banned?

Daisy, Daisy, give me your answer do

Next year will mark the 60th anniversary of the Dartmouth Artificial Intelligence (AI) Conference. That conference, which marked the birth of AI research, explored whether machines could simulate any aspect of human intelligence.

Since then, Google has developed a self-driving car, computers can type what you speak, and phones have become really good at playing chess.

We’ve come a long way, but now, cognitive computing promises to take us a step further. Ever since IBM’s Watson computer won Jeopardy, researchers have been busy working on the idea that computers can solve the kinds of woolly, messy problems that humans deal with on a daily basis.

Professor Mark Bishop, director of the Tungsten Centre for Intelligent Data Analytics at Goldsmiths, University of London, sees several different definitions of cognitive computing.

The commercial one focuses on solving those ambiguous, uncertain problems that humans were always good at, and that traditional computers couldn’t do. Things such as medical diagnoses, for example.

Bishop is an associate editor of the journal Cognitive Computation, which also harbours other definitions. In particular, “biologically-inspired computational accounts of all aspects of natural and artificial cognitive systems”. In short, computer simulations of brains. The Blue Brain project uses an IBM Blue Gene supercomputer to do just that, while the EU-funded Human Brain Project is another such effort.

A no-brainer

These two approaches have different goals. One seeks to create a platform akin to a real human mind, possibly opening the door to exploring things such as consciousness and emotion. The other focuses on real-world tasks without needing a computerised version of a real brain to do them.

That mirrors the divergence in artificial intelligence theory itself. ‘Human-level’ AI was what some envisaged at the original Dartmouth meeting. But many have satisfied themselves with systems that mimic narrowly-defined functions, such as self-driving cars or chess computers.

Perhaps cognitive systems, as commercially defined, inch a little further along the spectrum. They still work in relatively narrowly-defined areas, but they can adapt and learn within those areas, and can handle more complex tasks that require context and complex interaction.

Cognitive systems may not think like people, or feel emotions, but they can discover patterns in vast amounts of data, make decisions based on them, and then engage people effectively.

Discovering data

Discovering things about the world around it is a key part of the process for a cognitive system.

“With cognitive computing there is an underlying knowledge model, specifically a semantic model, of the domain and associated cognitive processes, such as decision processes, that are relevant for that domain,” said Tony Sarris, founder of N2Semantics, a consulting firm that works in semantic technologies.

Cognitive systems are able to understand natural language questions and spit out understandable answers because of the taxonomies that they build up around specific knowledge domains.

A cognitive system for tax services wouldn’t be able to answer the same questions as one for medical researchers, for example, because it wouldn’t understand the necessary concepts and how they fit together.
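To make that concrete, here is a minimal sketch of how such a domain taxonomy might be represented, as plain subject-predicate-object triples with a tiny lookup helper. The terms and the answer_question() function are invented for illustration, not taken from any vendor's actual system.

    # A toy domain "taxonomy" as subject-predicate-object triples.
    # All terms and the answer_question() helper are illustrative only.
    MEDICAL_TRIPLES = {
        ("metformin", "treats", "type 2 diabetes"),
        ("metformin", "is_a", "biguanide"),
        ("type 2 diabetes", "is_a", "metabolic disorder"),
    }

    TAX_TRIPLES = {
        ("standard deduction", "reduces", "taxable income"),
        ("capital gain", "is_a", "taxable income"),
    }

    def answer_question(triples, predicate, obj):
        """Return every subject linked to obj by the given predicate."""
        return [s for (s, p, o) in triples if p == predicate and o == obj]

    # "What treats type 2 diabetes?"
    print(answer_question(MEDICAL_TRIPLES, "treats", "type 2 diabetes"))  # ['metformin']
    print(answer_question(TAX_TRIPLES, "treats", "type 2 diabetes"))      # [] - wrong domain

The empty answer from the tax-domain triples is the point: outside its own knowledge domain, the system has nothing to reason with.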

Techniques designed to teach computers about different knowledge domains have been developing for years. “Originally, the grandparents of cognitive computing were manually constructed ontologies created in the late eighties and early nineties,” said Sarris.

In the early 2000s, the semantic web movement tried to create open data models using the same concepts. But the real innovations come when machines can react to ontological data, testing relationships and learning from the results, in a form of machine learning.

“Cognitive systems, like their human counterparts, have a major focus on learning, including feedback loops,” said Sarris. “In the latter case, that's usually comparing the results of an action taken, or a decision made, to the desired outcome, and taking into account what worked or what didn't.”

This is why cognitive systems tend to get better as they go along. They create models of the world based on what they try, and what results they get back.
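As a rough illustration of that feedback loop, the toy sketch below tries actions, compares each outcome with the desired result and nudges its weights accordingly. The action names and the success rule are assumptions made for the example, not a description of how any real cognitive system is built.

    import random

    # Toy feedback loop: try an action, compare the outcome with the desired
    # result, and adjust the model's weights accordingly.
    weights = {"action_a": 1.0, "action_b": 1.0}

    def choose(weights):
        actions, values = zip(*weights.items())
        return random.choices(actions, weights=values)[0]

    def update(weights, action, succeeded, step=0.2):
        # Reinforce what worked, dampen what didn't.
        weights[action] *= (1 + step) if succeeded else (1 - step)

    for _ in range(100):
        action = choose(weights)
        succeeded = (action == "action_b")   # pretend only action_b meets the goal
        update(weights, action, succeeded)

    print(weights)  # action_b's weight grows as evidence accumulates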

Human beings are very good at basic evidence-based learning too, of course, which is why children learn early on not to do things that might hurt them. But applying that kind of learning to complex business situations is difficult. The challenge lies in the sheer volume of information that specialists must consume.

Cognitive systems can help here, by processing far more evidence than a single human being ever could, munching their way through terabytes of structured and unstructured data alike, and putting it in context.

This idea of context is particularly important in cognitive computing, and was one of the key characteristics outlined in a joint definition produced by a working group that included experts from IBM, Microsoft, Oracle, HP, Google and Cognitive Scale.

Context goes far beyond simply relating concepts together in semantic ontologies. It includes a variety of data points ranging from physical location, time and current task through to what a user is doing and where they are doing it. Their role – who they are – is another data point that might feed into a cognitive system's decisions.
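Here is a sketch of what such a context record might look like in practice; the field names and the trivial rule inside recommend() are assumptions made purely for illustration.

    from dataclasses import dataclass
    from datetime import datetime

    # Sketch of a context record of the kind described above.
    @dataclass
    class Context:
        user_role: str       # who the user is
        location: str        # where they are
        task: str            # what they are doing
        timestamp: datetime  # when they are doing it

    def recommend(ctx: Context) -> str:
        # A trivial rule-based stand-in for a context-aware decision.
        if ctx.user_role == "oncologist" and ctx.task == "reviewing scans":
            return "surface recent imaging studies and relevant trial data"
        return "show the general dashboard"

    ctx = Context("oncologist", "radiology ward", "reviewing scans", datetime.now())
    print(recommend(ctx))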

