Open is the new closed

Image by Stable Diffusion: “Silicon valley giant trying to have its cake and eat it”

The joke about OpenAI having to rebrand to ClosedAI (triggered by the secrecy around its GPT-4 unveiling) is pretty apt. All in the spirit of what the VC community calls ‘creating a moat’.

Meta's involuntary openness around its large language model, LLaMA, got another giant, Google, thinking. Their conclusion, as a leaked memo makes clear, is that in the end a proprietary model will not create competitive differentiation.

In the memo, Google seems to embrace open source as the route forward for generative AI: smaller models, different approaches to fine-tuning, leveraging the crowd, etc. Sounds swell.

The funny thing is, however, that around the same time word came out that Google intends to share its AI research less freely. How to reconcile these two perspectives? The memo gives some clear pointers.

Working hypothesis: Google will try to actively orchestrate the open source efforts on LLMs through controlled releases of models and research that enable incremental improvements. Meanwhile, Google will increasingly shield its cutting-edge research, frustrated that OpenAI became a massive success by leveraging fundamental research on transformers that originated at Google.

Adding that all up, it seems that the grip of ‘big tech’ on AI will not be challenged by open source anytime soon. Curious how this will play out.

There are a huge number of ways in which Artificial General Intelligence (AGI) can take over the world, rendering humanity essentially useless

Max Tegmark – Life 3.0

Nov. 2017: An interesting exploration of the implications of AGI, marred by analytical philosophy's typical preference for constructing intricate, highly theoretical scenarios while under-emphasizing basic challenges (in the case of AGI: lack of robustness / antifragility).

Jun. 2023: The author has leveraged the recent rise of LLMs like ChatGPT to further fuel fear of an AGI breakout, even though other AI-related risks require more immediate attention.

When it comes to regulating AI, I root for the bureaucrats

Image by Deep Dream Generator: “Evil AI conquers Silicon Valley, taking no hostages”

Following the launch of GPT-4, the Silicon Valley elite started hyping the AI scare in an open letter.

The progress in LLMs and similar generative AI models is impressive and will have a major impact on both society and the enterprise. But fearmongering is totally unhelpful and obscures the real issues.

Rather than naively stopping AI development, society should focus on two more specific topics that are under-emphasized in the recent public debate:

  1. What data to train on? Current models aim to train on, roughly, ‘the totality of the internet’, which already leads to interesting legal challenges around copyright infringement and ownership.
  2. What applications to pursue? The current generation of AI can do amazing party tricks and can lead to major efficiency improvements. But it can also be used for deception and has severe limitations and biases.

The key question to address: “How can we use existing legislation to protect society from misuse of AI, and what additional legislation is needed?” It sounds less lofty than what the open letter calls for, but is much more constructive. Moreover, this perspective calls into question not just new AI models ‘more powerful than GPT-4’, but also existing models and the governance applied to them.

Long before the recent open letter was written, the EU had already published the AI Act to address AI-related risks. Brought to you by the same institution that forced Apple to adopt compatible charging cables. It’s not perfect. It’s not complete. But it is a good start. It would have been so nice if the writers of the open letter had given credit where it is due.

When it comes to protecting my rights, security, and safety as a citizen, I put much more trust in EU bureaucrats than in the Silicon Valley echo chamber that tends to over-index on libertarianism and techno-utopianism.

Mixed-flour sourdough bread

Ingredients

  • 500g white flour
  • 300g wholewheat flour
  • 100g rye flour
  • 300g starter (120% hydration, activated)
  • 500g water
  • 18g salt
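
For reference, a rough estimate of the overall dough hydration from these quantities, assuming ‘120% hydration’ means 1.2 g of water per 1 g of flour in the starter:

$$\text{starter flour} = \frac{300}{1 + 1.2} \approx 136\,\text{g}, \qquad \text{starter water} = 300 - 136 \approx 164\,\text{g}$$

$$\text{overall hydration} = \frac{500 + 164}{500 + 300 + 100 + 136} = \frac{664}{1036} \approx 64\%$$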

Method

  • Mix flours and water
  • Leave for 30 mins
  • Add starter and salt
  • Knead. In my BOSCH 1600W MaxxiMUM machine c. 9 mins:
    • c.4 mins slow (“speed 1”)
    • c. 5 mins fast (“speed 3”)
  • Let the dough rise for c. 8 hrs
  • Shape (I shape 2 batons from this amount)
  • Let rise for another 90 minutes to 2 hours
  • Bake in a pre-heated oven at 230°C for 45 minutes, with a cup of water at the bottom and extra moisture from a plant sprayer
    • First 15 mins: put the baking sheet with the bread on the low oven rack, with another baking sheet on top
    • After that, remove the top baking sheet and move the bread to middle rack for the final 30 minutes

Inspired by Das Brot

  • I use a bit more wholewheat compared to Das Brot.