Open is the new closed

Image by Stable Diffusion: “Silicon valley giant trying to have its cake and eat it”

The joke about OpenAI having to rebrand to ClosedAI (triggered by the secrecy around its GPT-4 unveiling) is pretty apt. All in the spirit of what the VC community calls ‘creating a moat’.

Meta's involuntary openness around its large language model, LLaMA, got another giant, Google, thinking. As a leaked memo makes clear, their conclusion is that, in the end, a proprietary model will not create competitive differentiation.

In the memo, Google seems to embrace open source as the route forward for generative AI: smaller models, different approaches to fine-tuning, leveraging the crowd, etc. Sounds swell.

The funny thing is, however, that around the same time word came out that Google intends to share its AI research less freely. How to reconcile these two perspectives? The memo gives some clear pointers.

Working hypothesis: Google will try to actively orchestrate the open source efforts on LLMs through controlled releases of models and research that enable incremental improvements. Meanwhile, Google will increasingly shield its cutting-edge research, frustrated as it is that OpenAI became a massive success by leveraging fundamental research on transformers that originated at Google.

Adding that all up, it seems that the grip of 'big tech' on AI will not be challenged by open source anytime soon. Curious how this will play out.