There are many tasks that we humans are good at but don’t know how we do. Recognising the number ‘3’ is one example; understanding spoken language is another. These are the tasks where machine learning has come into its own.

Another direction has been less explored. Instead of learning to do things that people do well, ML could learn to do things that people do badly.

There are plenty of things our brains don’t seem to be naturally good at. One example is conditional probability: it has been demonstrated repeatedly that our intuitions lead us to the wrong conclusions. Paradoxically, though, the only reason we know we’re bad at probability is that we also know how to work out the right answer.
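To see how counterintuitive the right answer can be, here is a minimal sketch of the classic base-rate problem. The numbers are made-up but typical illustrations (1% prevalence, 99% detection rate, 5% false-positive rate), not figures from the text above.

```python
# Bayes' theorem applied to the classic base-rate problem.
# All numbers are illustrative assumptions, not data from the text.

p_disease = 0.01              # prior: 1% of people have the condition
p_pos_given_disease = 0.99    # test sensitivity: 99% detection rate
p_pos_given_healthy = 0.05    # false-positive rate: 5%

# Total probability of a positive test, summed over both groups.
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive

print(f"P(disease | positive test) = {p_disease_given_pos:.2f}")
# Prints roughly 0.17 -- far lower than the near-certainty most people guess.
```

Working through the arithmetic like this is exactly the kind of ‘story’ discussed below: a few lines of algebra that correct an intuition almost everyone shares.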

So, there are two groups of things we know how to do. One consists of the ‘obvious’ things you don’t even need to think about, like recognising a friend’s face. The other consists of things we can figure out, or tell ourselves a story about how to do them. This story might be spelled out in terms of logic, or crystallised into equations.

By this definition, science falls squarely into the second category. We invented science precisely because our intuitions about the world have limitations. To learn anything new, we have to invent stories about how the world works, and then see how well these stories hold up.

Stories make our thinking incredibly flexible, but they also have their limitations. Any system with many interconnected feedback loops will be hard to capture in story form. Unfortunately, such systems include many of the ones we would most like to understand, such as the brain, the atmosphere, and the economy.

This is where ML could have real power. Right now, we are recreating the intuitions evolution happened to provide; why not create some new ones? There seems to be no reason why an algorithm couldn’t learn to recognise an unstable market as easily as we recognise a friend’s face.

This kind of project would be harder than teaching machines to copy things we’re good at, precisely because we don’t know the right answers. But the potential rewards would be much greater.