Jamie Kreiner – The wandering mind
The book loses a lot of specificity and power due to its suppression of differences in denomination and gender, and even more because the writer does not really seem to have a clear point to make.
Balaji Srinivasan – The Network state
Some fair nuggets of socio-economic diagnosis mixed with personal pet peeves and drowned in a techno-utopian rant.
Eben Hewitt – Technology Strategy Patterns
The ‘cookbook’ approach does a lot to demystify Strategy and Architecture, while the digressions into philosophy make the relatively basic content also palatable for the advanced reader.
The joke about OpenAI having to rebrand to ClosedAI (triggered by the secrecy around its GPT-4 unveiling) is pretty apt. All in the spirit of what the VC community calls ‘creating a moat’.
The involuntary openness of Meta on its large language model, LLaMA, got another giant, Google, thinking. Their conclusion is that, in the end, a proprietary model will not create competitive differentiation, as is clear from a leaked memo.
In the memo, Google seems to embrace open source as the route forward for generative AI: smaller models, different approaches to fine-tuning, leveraging the crowd, etc. Sounds swell.
The funny thing is, however, that around the same time word came out that Google intends to share AI research less freely. How to reconcile these two perspectives? The memo gives some clear pointers.
Working hypothesis: Google will try to actively orchestrate the open source efforts on LLMs through controlled releases of models and research that enable incremental improvements. Meanwhile, Google will increasingly shield their cutting-edge research, as they are really frustrated that OpenAI became a massive success leveraging fundamental research on transformers that originated at Google.
Adding that all up, it seems that the grip of ‘big tech’ on AI will not be challenged by open source anytime soon. Curious how this will play out.
Reed Hastings and Erin Meyer – No Rules Rules
Pretty strong boundary conditions need to be fulfilled in order for this scheme to work, including broad acceptance of a high level of interpersonal ruthlessness.
Nov. 2017: Interesting exploration of the implications of AGI, marred by Analytical Philosophy’s typical preference for constructing intricate, highly theoretical scenarios while under-emphasizing basic challenges (in the case of AGI: lack of robustness / antifragility).
Jun. 2023: The writer has leveraged the recent rise of LLMs like ChatGPT to further fuel fear about an AGI breakout, even though other AI-related risks require more immediate attention.
Katie Mack – The end of everything
Highly entertaining take that builds up a rudimentary understanding of astrophysics along the way.
Following the launch of GPT-4, the Silicon Valley elite started hyping the AI scare in an open letter.
The progress in LLMs and similar generative AI models is impressive and will have a major impact on both society and the enterprise. But fearmongering is totally unhelpful and obscures the real issues.
Rather than naively stopping AI development, society should focus on two more specific topics that are under-emphasized in the recent public debate:
The key question to address: “How can we use existing legislation to protect society from misuse of AI, and what additional legislation is needed?” Sounds less lofty than what the open letter calls for, but is much more constructive. Moreover, this perspective calls into question not just new AI models ‘more powerful than GPT-4’, but also existing models and the governance applied to them.
Long before the recent open letter was written, the EU published the AI Act to address AI-related risks. Brought to you by the same institution that forced Apple to adopt compatible charging cables. It’s not perfect. It’s not complete. But it is a good start. It would have been so nice if the writers of the open letter had given credit where it is due.
When it comes to protecting my rights, security, and safety as a citizen, I put much more trust in EU bureaucrats than in the Silicon Valley echo chamber that tends to over-index on libertarianism and techno-utopianism.
Lucy Worsley – Agatha Christie
The book over-indexes a bit on the domestic context, which does not help in de-mystifying the genius of its subject.
Susan Magsamen and Ivy Ross – Your brain on art
Interesting to read how advances in brain science lead to confirmation of intuitive but traditionally hard-to-prove hypotheses.
Eliot Higgins – We are Bellingcat
The book raises the question of what happens when online sleuthing methods are applied for profit maximization rather than for truth-seeking.
The vocabulary of ‘sims’ and ‘VR’ makes for entertaining illustrations of traditional philosophical concepts, but the author’s core arguments about simulation and physical reality seem to implicitly assume a suspicious form of Cartesian dualism.
Richard Feynman – Surely you’re joking, Mr. Feynman
Not all anecdotes have aged well, but there are enough gems to make the book worthwhile.
A well-written account of the history of quantum physics in the wake of the Bohr v. Einstein controversy.
The ‘it is all about oil’ narrative of international politics over the last 20 years, made explicit in a comprehensive yet digestible form.
Sabine Hossenfelder – Existential Physics
Elegant combination of depth, playful curiosity and humbleness.
Paolo Zellini – The Mathematics of the Gods and the Algorithms of Men
Guided tour through the philosophy of mathematics, seldom deviating from the expected and missing in-depth reflection on the role of data science in this regard.
Dipo Faloyin – Africa is not a country
Well-known story told in a fresh style, which unfortunately still serves a purpose.
Katherine Eban – Bottle of lies
Impressive and concerning whistleblower story illustrating the subtleties in developing and producing effective generic drugs.
Piethein Strengholt – Data management at scale
Thorough exposé that covers a lot of ground, over-indexing on the architecture side.