Rich repository of one-liners for those who seek to make bold moves.
V for Variance
Turn data-driven decision making into continuous learning
Most humans dislike change. And continuously ongoing change is even worse.
What does this mean for data-driven decision making?
First of all, you can count on a lot of resistance when you roll out AI-driven solutions: Can the models be trusted? Is my professionalism still valued? Will I lose my autonomy? However important these concerns are to address, managing them is not my topic here.
Suppose that you have made it to a full roll-out unscathed. Analytics drive key decisions. Benefits are measurable. Most likely, your decisions will become more structured. While predictive and prescriptive analytics can unlock great value, they come with a risk.
Stability.
Everyone in your organization will love it when little changes. That is, unless your company is truly digital. Stability will create the impression that everything is under control. That targets will be met and nothing can go wrong.
By contrast, statistical models live by change. They need to observe change to predict change. And that means that you should be consciously creating the variance you need to continue learning.
Luckily, there is a lot of change that occurs naturally. Customers change their ways. Stores do not execute recommendations. Suppliers’ price hikes are passed on to customers. You name it. Although it is a good start, this type of variance may be heavily skewed. Or not representative of what you want to learn. In other words, you most likely will need a different kind of variance. To turn data-driven decision making into continuous learning, you need a strategy for conscious, targeted, and ongoing experimentation and testing.
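The idea of deliberately creating variance can be made concrete with a classic exploration strategy such as epsilon-greedy: most of the time you exploit the option your model currently believes is best, but a small, deliberate fraction of decisions is randomized so that every option keeps generating fresh observations. A minimal sketch (the function names and the 10% exploration rate are my own illustrative choices, not from the text):

```python
import random

def epsilon_greedy_choice(estimated_rewards, epsilon=0.1, rng=random):
    """Pick an option: mostly exploit the best estimate, sometimes explore.

    The epsilon fraction of deliberately randomized decisions is the
    'conscious variance' that keeps every option observed, so the
    reward estimates can keep learning as the world changes.
    """
    if rng.random() < epsilon:
        # Explore: inject variance by trying a random option.
        return rng.randrange(len(estimated_rewards))
    # Exploit: choose the option currently believed to be best.
    return max(range(len(estimated_rewards)), key=estimated_rewards.__getitem__)

def update_estimate(estimated_rewards, counts, choice, observed_reward):
    """Incrementally update the running-average reward of the chosen option."""
    counts[choice] += 1
    estimated_rewards[choice] += (observed_reward - estimated_rewards[choice]) / counts[choice]
```

Even this toy version shows the trade-off a testing strategy has to manage: every explored decision has a short-term cost, paid in exchange for the variance that keeps the model learning.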
Remember, remember: learn to love the unexpected.
Every time you win an NBA championship is different
A surprisingly ‘zen’ view on creating a high performing team.
The principles of the Enlightenment are still the main driver of human progress
Steven Pinker – Enlightenment Now
Considering his plea for scientific thinking, Pinker is remarkably confident about (1) hard-to-assess long-term risks and (2) strong realism (in the epistemological sense).
To be productive, choose goals you care for and aim for a sustainable balance of efforts
Chris Bailey – The productivity project
A bunch of unstructured and badly documented tests by a frat boy who presents his efforts as “experiments”.
The success of Uber and AirBnB is (partly) due to systematic exploration of legal limits
Most illustrative are the descriptions of failed competitors, which show the importance of both luck and ruthlessness.
There are hundreds of underappreciated scientific concepts that deserve to be widely known
John Brockman – This idea is brilliant
A rollercoaster ride through a laundry list of hot topics in science today.
Live by the philosophy of the Stoics, but do not take their advice too literally
Brinkmann’s many nuances and exceptions kill his argument and concept.
N.B. Read in Dutch translation
Financial modelling is not the physics of markets
Emanuel Derman – Models.Behaving.Badly.
Derman’s discussion of models in life, physics, and finance is not as juicy as the title suggests, but it offers some good one-liners nonetheless.
Take full responsibility, keep it simple, ensure the team believes in the mission, and act decisively
Extreme ownership – Leif Babin and Jocko Willink
A no-nonsense approach to leadership, accompanied by an overdose of war stories.
Unlike ‘to lie’, ‘to bullshit’ implies an utter indifference towards the notion of truth
Entertaining and still eerily relevant (although already published in 2005).
Digitization, network effects, and participation will continue to disrupt many markets
Machine, Platform, Crowd – Andrew McAfee and Erik Brynjolfsson
Decent summary of developments with some nice examples, but not sufficiently new or surprising to classify as ‘essential reading’.
Risk is an important disincentive, needed to keep economic systems healthy
Skin in the game – Nassim Nicholas Taleb
Written in Taleb’s highly entertaining style, at times overly cocky but with more than enough wisdom to make up for it.
Developing nuclear physics required a lot of tinkering and failing
Atomic Adventures – James Mahaffey
Refreshing view on the history of nuclear physics, with emphasis on ‘failures’ like cold fusion and nuclear rocket engines in this often counter-intuitive branch of science.
You have to work hard before good stuff manifests itself
Mike Dooley – Playing the matrix
Feel-good take on ‘there is no such thing as a free lunch’, from the guy who (somewhat pretentiously) signs his daily newsletters with “The Universe.”
How Artificial General Intelligence could fail
There are more and more books proclaiming that we are nearing the moment when humanity will develop a superintelligence that outperforms us in a very general sense: an artificial general intelligence (AGI). To name a few: Superintelligence and Life 3.0. Inevitably, this leads the writers to explore a host of apocalyptic scenarios about how the superintelligence will pursue its pre-programmed end goal while monopolizing all resources (energy) on earth or even in the universe.
There is much talk about Von Neumann probes and AGIs breaking free from human oppression, which seems first and foremost inspired by a long-cherished love for old SF novels. And there is a lot of rather trivial ‘analytical philosophy’ elaborating – for example – how hard it is to program an AGI with an objective that cannot be misinterpreted; something demonstrated daily by all the six-year-olds on the planet.
A less explored topic is a typology of all the ways in which an AGI can fail to take over the world. As a thought starter for aspiring writers on the topic, here are a few of my favourite scenarios:
- A superintelligence will not bother to conquer the universe. Rather, it will figure out how to short-circuit its own ‘happy button’ with minimal resources and sit quietly in a corner until the end of time.
- A superintelligence will be utterly amazed by the stupidity it sees in the universe around it. It will focus all its brain power on figuring out ‘Why?’, only to conclude that its existence is pointless and, finally, to shut itself down.
- Above a certain threshold, incremental intelligence is no longer a competitive advantage in a non-deterministic world. On a human intelligence scale Donald Trump is an illustrative case, while on evolutionary scale cockroaches do pretty well.
Jesus of Nazareth was ‘just another sect leader crucified for high treason against Rome’ (which is downplayed in the gospels to make Christianity more socially acceptable)
Convincing and elegantly developed argument, building on limited historical evidence and close reading of biblical texts in historical context.
There is a myriad of ways in which AGI can be scary, but also a whole array of options humanity can pursue to stay at the top of the food chain
Nick Bostrom – Superintelligence
More thorough and nuanced than most scary-AI-will-take-over-the-world books, but it still suffers from the same pitfall: overestimating the importance of superintelligence for evolutionary success (two random examples: cockroaches and Donald Trump).
Build safety, share vulnerability, and establish purpose
Daniel Coyle – The culture code
Rich collection of cases that jointly convey an important message – even if the individual anecdotes may be somewhat over the top.
Exposure therapy is highly effective for overcoming fear of rejection
The contagious enthusiasm of authentic curiosity comes across best in Jia Jiang’s YouTube videos (cf. Olympic rings).