V for Variance

Turn data-driven decision making into continuous learning

Most humans dislike change. And continuous, ongoing change is even worse.

What does this mean for data-driven decision making?

First of all, you can count on a lot of resistance when you roll out AI-driven solutions: Can the models be trusted? Is my professionalism still valued? Will I lose my autonomy? However important they are to address, managing such concerns is not my topic here.

Suppose that you have made it to a full roll-out unscathed. Analytics drive key decisions. Benefits are measurable. Most likely, your decisions will become more structured. While predictive and prescriptive analytics can unlock great value, they come with a risk.

Stability.

Everyone in your organization will love it when little changes. That is, unless your company is truly digital. Stability will create the suggestion that everything is under control, that targets will be met, and that nothing can go wrong.

By contrast, statistical models thrive on change. They need to observe change to predict change. And that means you should be consciously creating the variance you need to continue learning.

Luckily, a lot of change occurs naturally. Customers change their ways. Stores do not execute recommendations. Suppliers’ price hikes are passed on to customers. You name it. Although it is a good start, this type of variance may be heavily skewed, or not representative of what you want to learn. In other words, you will most likely need a different kind of variance. To turn data-driven decision making into continuous learning, you need a strategy for conscious, targeted, and ongoing experimentation and testing.
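
To make this concrete: one simple way to create such variance is an epsilon-greedy decision rule, in which you mostly follow the model’s recommendation but, with a small probability, deliberately test an alternative. The sketch below is purely illustrative and not taken from any particular system; the names (choose_action, epsilon, candidate_prices) are hypothetical.

    import random

    def choose_action(recommended, alternatives, epsilon=0.1, rng=random):
        """Mostly exploit the model's recommendation, but with probability
        epsilon run a deliberate experiment so the model keeps observing
        the variance it needs to learn from."""
        if rng.random() < epsilon:
            return rng.choice(alternatives)  # explore: targeted experiment
        return recommended                   # exploit: follow the model

    # Hypothetical pricing example
    recommended_price = 9.99                      # prescriptive model output
    candidate_prices = [8.99, 9.49, 9.99, 10.49]  # prices we are willing to test
    price_to_charge = choose_action(recommended_price, candidate_prices, epsilon=0.05)

In practice, the exploration rate would be tuned to what the business can tolerate, but the principle stays the same: a small, deliberate share of decisions is reserved for learning.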

Remember, remember: learn to love the unexpected.

How Artificial General Intelligence could fail

There are more and more books proclaiming that we are nearing the moment when humanity will develop a superintelligence that outperforms us in a very general sense: an artificial general intelligence (AGI). To name a few: Superintelligence and Life 3.0. Inevitably, this leads the writer to explore a host of apocalyptic scenarios about how the superintelligence will pursue its pre-programmed end goal while monopolizing all resources (energy) on earth or even in the universe.

There is much talk about Von Neumann probes and AGIs breaking free from human oppression, which seems first and foremost inspired by a long-cherished love of old SF novels. And there is a lot of rather trivial ‘analytical philosophy’ elaborating, for example, how hard it is to program an AGI with an objective that cannot be misinterpreted; something that is demonstrated daily by all the six-year-olds on the planet.

What seems to be a less explored topic is a typology of all the ways in which an AGI can fail to take over the world. As a thought-starter for aspiring writers on the topic, here are a few of my favourite scenarios:

  1. A superintelligence will not bother to conquer the universe. Rather, it will figure out how to short-circuit its own ‘happy button’ with minimal resources and sit quietly in a corner until the end of time.
  2. A superintelligence will be utterly amazed by the stupidity it sees in the universe around it. It will focus all its brain power on figuring out ‘Why?’, only to conclude that its existence is pointless and, finally, to shut itself down.
  3. Above a certain threshold, incremental intelligence is no longer a competitive advantage in a non-deterministic world. On a human intelligence scale, Donald Trump is an illustrative case, while on an evolutionary scale, cockroaches do pretty well.