Following the launch of GPT-4, the Silicon Valley elite started hyping an AI scare in an open letter.
The progress in LLMs and similar generative AI models is impressive and will have a major impact on both society and the enterprise. But fearmongering is totally unhelpful and obscures the real issues.
Rather than naively halting AI development, society should focus on two more specific topics that are under-emphasized in the recent public debate:
- What data to train on? Current models aim to train on, roughly, ‘the totality of the internet’, which already leads to thorny legal challenges around copyright infringement and ownership.
- What applications to pursue? The current generation of AI can do amazing party tricks and can deliver major efficiency improvements. But it can also be used for deception, and it has severe limitations and biases.
The key question to address: “How can we use existing legislation to protect society from misuse of AI, and what additional legislation is needed?” That sounds less lofty than what the open letter calls for, but it is much more constructive. Moreover, this perspective calls into question not just new AI models ‘more powerful than GPT-4’, but also existing models and the governance applied to them.
Well before the recent open letter was written, the EU published the AI Act to address AI-related risks. Brought to you by the same institution that forced Apple to adopt compatible charging cables. It’s not perfect. It’s not complete. But it is a good start. It would have been nice if the writers of the open letter had given credit where it is due.
When it comes to protecting my rights, security, and safety as a citizen, I put much more trust in EU bureaucrats than in the Silicon Valley echo chamber that tends to over-index on libertarianism and techno-utopianism.