New install

Switching to the NVIDIA proprietary graphics driver led to a crash. I did not have a live disk, so I had to do a full re-install.

A couple of tweaks. First of all, there is a more rigorous fix for the NVIDIA driver boot issue here. At least there is now a GRUB menu, so debugging is possible.

I have not yet dared to use the NVIDIA driver again. But the start-up logs still show an error related to the open-source driver, which seems to slow down the boot process.

To see boot errors:

journalctl -b | grep error
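A hedged alternative, in case some messages spell "error" differently: journalctl can filter on message priority directly, which catches everything logged at priority 'err' or worse regardless of wording:

journalctl -b -p err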

One of the things I tried, to solve the NXDOMAIN error I saw in the boot log, was to point /etc/resolv.conf directly at systemd-resolved:

sudo rm /etc/resolv.conf
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
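To check that the symlink is in place and that systemd-resolved is actually answering (resolvectl ships with newer systemd releases; older Ubuntu versions use systemd-resolve --status instead):

ls -l /etc/resolv.conf
resolvectl status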

Also followed this tip and re-installed systemd and the GNOME settings daemon:

sudo apt-get install --reinstall systemd gnome-settings-daemon gnome-settings-daemon-common
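A quick way to confirm the reinstall went through is to check the package status afterwards ('ii' in the first column means the package is correctly installed):

dpkg -l gnome-settings-daemon gnome-settings-daemon-common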

And enabled canonical-livepatch with a new token, as described here.

sudo snap install canonical-livepatch
sudo canonical-livepatch enable [#yourverylongtoken#]
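The enable step can then be verified with the client's status command (adding --verbose should print per-kernel patch details):

canonical-livepatch status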

How Artificial General Intelligence could fail

There are more and more books proclaiming that we are nearing the moment when humanity will develop a superintelligence that outperforms us in a very general sense: an artificial general intelligence (AGI). To name two: Superintelligence and Life 3.0. Inevitably, this leads the writer to explore a host of apocalyptic scenarios in which the superintelligence pursues its pre-programmed end-goal while monopolizing all resources (energy) on earth or even in the universe.

There is much talk about Von Neumann probes and AGIs breaking free from human oppression, which seems first and foremost inspired by a long-cherished love of old SF novels. And there is a lot of rather trivial 'analytical philosophy' elaborating, for example, on how hard it is to program an AGI with an objective that cannot be misinterpreted; something that is demonstrated daily by all the six-year-olds on the planet.

A less explored topic seems to be a typology of all the ways in which an AGI can fail to take over the world. As a thought-starter for aspiring writers on the topic, here are a few of my favourite scenarios:

  1. A superintelligence will not bother to conquer the universe. Rather, it will figure out how to short-circuit its own ‘happy button’ with minimal resources and sit quietly in a corner until the end of time.
  2. A superintelligence will be utterly amazed by the stupidity it sees in the universe around it. It will focus all its brain power on figuring out ‘Why?’, only to conclude that its existence is pointless and, finally, to shut itself down.
  3. Above a certain threshold, incremental intelligence is no longer a competitive advantage in a non-deterministic world. On a human intelligence scale, Donald Trump is an illustrative case, while on an evolutionary scale, cockroaches do pretty well.