Claire Maingon and Hélène Rochette – Le grand guide de la Normandie (in French)
Charming take on a tourist guide, revisiting the favorite spots of impressionist painters to recreate their magic.
Kate Fox – Watching the English
Light read with amusing observations, stretched over slightly more pages than necessary to convey the message.
Great to see journalists initiating change in their own organization.
As I have noted in earlier posts, data access is a major topic when it comes to achieving a healthy power balance in the information space. Glad to see more and more companies take this seriously.
Personally, I currently see little incentive for companies, organizations, or individuals to allow their data to be crawled for profit.
Theo Mulder – De hersenverzamelaar (The brain collector, read in Dutch)
The book is mostly written from the historical perspective free from contemporary judgements, which allows the writer to tell a nuanced story on a sensitive topic.
A brave attempt to put up a framework for assessing technological innovations that is rich in ideas, many of which [in 2023] are still relevant (e.g. Cognifying in the light of GenAI), though some feel outdated (e.g. Sharing in a post-truth world).
The author underplays the role of religious power structures in suppressing novel scientific ideas that go against traditionalist dogmas, which makes the book read more like a Christian apology than a balanced historical narrative.
David Abulafia – The great sea
The best parts are the details (e.g. on laws governing responsibilities at sea in medieval times), but these facts are buried in a thorough, impressively complete historical overview.
U.S. President Biden recently announced AI commitments agreed between the US government and Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
Fun fact: Meta unleashed its Llama 2 (even though there are questions about its openness) just before committing to protecting proprietary and unreleased model weights.
In any case, the USA has a totally different approach from the EU with its AI Act. These commitments provide a great opportunity for an early check on how self-regulation in AI could shape up.
There are three observations that stand out.
Vagueness
It has already been observed that most of said ‘commitments’ made by big tech are vague, generally non-committal, or confirmation of current practices.
Considering the success of the EU in getting big tech to change (e.g. GDPR, USB-C) I am convinced that in tech, strong legislation does not stifle creativity and innovation; but fuels it.
Data void
There are also notable omissions. The one that sticks out for me is the lack of commitment with respect to training data. And that at a moment when legal cases over data theft and copyright infringement are popping up in various places. In that context, Getty Images hopes that training on licensed content will become a thing.
Admittedly, discussions on data ownership are super interesting. But full clarity on the data going into foundational models (and the policies around it) would also sharpen the extent to which data biases may put model fairness and ethics at risk.
Content validation
By far the most interesting commitment is around identification of AI-generated content:
“The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system”
Considering the expected amount of generated content, I expect watermarking AI-generated content (soon the vast majority of data volumes) to be impractical.
And it also addresses the problem from the wrong side. In the future, the question will not be “What is fake?”, but rather “What is real?”
This points to watermarking of human-produced content as the way forward. Think of an NFT for every photo you take with your smartphone or digital camera. I haven't heard Apple, Samsung, or Nikon on this yet. But I wouldn't be surprised if we see announcements in the near future.
This weekend I took Tarot as a little test case for the OpenAI API.
It turned into a quick lesson in how not to use ChatGPT (in so many ways 🙂 ).
Fortune telling by ‘reading the cards’ seemed like a good use-case for generative AI: ChatGPT can draw random cards and explain their meaning in the convincing tone of a Tarot Zealot.
I had to tweak the prompt a bit when ChatGPT explained in a condescending tone that it was an AI and could not actually ‘draw cards’. But after that, the whole fortune telling business could be swiftly disrupted in a few lines of code.
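The setup was roughly along these lines — a minimal sketch, not the original script. The `openai` call uses the API shape from mid-2023; the prompt wording, model name, and the `RUN_TAROT_DEMO` environment flag are illustrative assumptions, and letting Python (not the model) draw the cards side-steps ChatGPT's refusal to ‘draw’ them itself:

```python
import os
import random

# The 78-card Tarot deck: 22 major arcana plus 56 minor arcana.
MAJOR_ARCANA = [
    "The Fool", "The Magician", "The High Priestess", "The Empress",
    "The Emperor", "The Hierophant", "The Lovers", "The Chariot",
    "Strength", "The Hermit", "The Wheel of Fortune", "Justice",
    "The Hanged Man", "Death", "Temperance", "The Devil",
    "The Tower", "The Star", "The Moon", "The Sun",
    "Judgement", "The World",
]
RANKS = ["Ace", "Two", "Three", "Four", "Five", "Six", "Seven",
         "Eight", "Nine", "Ten", "Page", "Knight", "Queen", "King"]
MINOR_ARCANA = [f"{rank} of {suit}"
                for suit in ("Wands", "Cups", "Swords", "Pentacles")
                for rank in RANKS]
DECK = MAJOR_ARCANA + MINOR_ARCANA  # 22 + 56 = 78 cards

def draw_cards(n: int = 3) -> list[str]:
    """Draw n distinct cards in code, so the randomness is ours."""
    return random.sample(DECK, n)

def build_prompt(cards: list[str]) -> str:
    """Illustrative prompt; the original wording is not in the post."""
    return ("You are a theatrical Tarot reader. The client has drawn: "
            + ", ".join(cards)
            + ". Give a short, confident reading of this three-card spread.")

# Guarded behind a hypothetical env flag so the sketch runs offline.
if os.environ.get("RUN_TAROT_DEMO"):
    import openai  # API shape as of mid-2023 (openai<1.0)
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_prompt(draw_cards())}],
    )
    print(reply.choices[0].message.content)
```

Note that here the script, not the model, is responsible for the random draw — which matters for what follows.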
For example:
Card 1 is 'The Wheel of Fortune'
Card 2 is 'The Moon'
Card 3 is 'The Star'
Wait one minute for your fortune teller to give her explanation.
She has to align with higher powers on the interpretation of these cards...
Hear your fortune:
Ah, my dear client, what an interesting trio of cards you have drawn! The Wheel
of Fortune represents life's constant changes and cycles, indicating that a
major turning point is approaching. The Moon signifies a time of uncertainty,
where emotions and intuition guide your path. However, fear not, for The Star
brings hope and inspiration, shining light on your journey. This whimsical
combination suggests that you should prepare for unexpected shifts, trust your
instincts, and stay optimistic, as ultimately, a bright and promising future
lies ahead. Remember, my friend, laughter is the best crystal ball gazer!
But after trying my “Tarbot” a couple of times, an odd pattern emerged: most of the ‘random’ cards were from the major arcana. That is a special category of 22 of the 78 Tarot cards that is considered more interesting, deep, and complex by Tarot aficionados.
In a longer experiment, I got to no less than 76% major arcana. A truly random draw would have resulted in c. 28%.
Ouch! How is that for a bias towards what seems interesting?
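For reference, the c. 28% baseline is easy to verify with a quick simulation — a sketch of the null hypothesis, not the original experiment:

```python
import random

# A Tarot deck has 78 cards, of which 22 are major arcana.
MAJOR, TOTAL = 22, 78
deck = ["major"] * MAJOR + ["minor"] * (TOTAL - MAJOR)

# Simulate 10,000 truly random three-card draws without replacement
# and measure the share of major arcana among all drawn cards.
random.seed(1)
draws = [card for _ in range(10_000) for card in random.sample(deck, 3)]
share = draws.count("major") / len(draws)

print(f"Analytical baseline: {MAJOR / TOTAL:.1%}")  # 28.2%
print(f"Simulated share:     {share:.1%}")
```

Against that ~28% baseline, 76% major arcana is a glaring deviation.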
Mariana Mazzucato and Rosie Collington – The big con
The book paints a naive caricature of the consulting industry, downplays the role and responsibility of other actors and, unfortunately, lacks a realistic alternative for flexibly solving skill and capacity deficits (especially in the public sector); thereby undermining any justified concerns.
Jamie Kreiner – The wandering mind
The book loses a lot of specificity and power due to the suppression of differences in denomination and gender and even more because the writer does not really seem to have a clear point to make.
Balaji Srinivasan – The Network state
Some fair nuggets of socio-economical diagnosis mixed with personal pet peeves and drowned in a techno-utopian rant.
Eben Hewitt – Technology Strategy Patterns
The ‘cookbook’ approach does a lot to demystify Strategy and Architecture, while the digressions into philosophy make the relatively basic content also palatable for the advanced reader.
The joke about OpenAI having to rebrand to ClosedAI (triggered by the secrecy around its GPT-4 unveiling) is pretty apt. All in the spirit of what the VC community calls ‘creating a moat’.
The involuntary openness of Meta on its large language model, LLaMA, got another giant, Google, thinking. Their conclusion is that, in the end, a proprietary model will not create competitive differentiation, as is clear from a leaked memo.
In the memo, Google seems to embrace open source as the route forward for generative AI: smaller models, different approaches to fine-tuning, leveraging the crowd, etc. Sounds swell.
The funny thing is, however, that around the same time word came out that Google intends to share AI research less freely. How to reconcile these two perspectives? The memo gives some clear pointers.
Working hypothesis: Google will try to actively orchestrate the open-source efforts on LLMs through controlled releases of models and research that enable incremental improvements. Meanwhile, Google will increasingly shield its cutting-edge research, frustrated that OpenAI became a massive success leveraging fundamental research on transformers that originated at Google.
Adding that all up, it seems that the grip of ‘big tech’ on AI will not be challenged by open source anytime soon. Curious how this will play out.
Reed Hastings and Erin Meyer – No Rules Rules
Pretty strong boundary conditions need to be fulfilled in order for this scheme to work; including broad acceptance of a high level of interpersonal ruthlessness.
Nov. 2017: Interesting exploration of the implications of AGI, marred by analytical philosophy's typical preference for constructing intricate, highly theoretical scenarios while under-emphasizing basic challenges (in the case of AGI: lack of robustness / antifragility).
Jun. 2023: The writer has leveraged the recent rise of LLMs like ChatGPT to further fuel fear about an AGI break-out – even though other AI-related risks require more imminent attention.
Katie Mack – The end of everything
Highly entertaining take on rudimentary astrophysics.
Following the launch of GPT-4, the Silicon Valley elite started hyping the AI scare in an open letter.
The progress in LLMs and similar generative AI models is impressive and will have major impact on both society and the enterprise. But fearmongering is totally unhelpful and obscures the real issues.
Rather than naively stopping AI-development, society should focus on two more specific topics that are under-emphasized in the recent public debate:
The key question to address: “How can we use existing legislation to protect society from misuse of AI, and what additional legislation is needed?” Sounds less lofty than what the open letter calls for, but it is much more constructive. Moreover, this perspective calls into question not just new AI models ‘more powerful than GPT-4’, but also existing models and the governance applied to them.
Already long before the recent open letter was written, the EU published the AI Act to address AI-related risks. Brought to you by the same institution that forced Apple to adopt compatible charging cables. It’s not perfect. It’s not complete. But it is a good start. It would have been so nice if the writers of the open letter had given credit where it is due.
When it comes to protecting my rights, security, and safety as a citizen, I put much more trust in EU bureaucrats than in the Silicon Valley echo chamber that tends to over-index on libertarianism and techno-utopianism.
Lucy Worsley – Agatha Christie
The book over-indexes a bit on the domestic context, which does not help in de-mystifying the genius of its subject.
Susan Magsamen and Ivy Ross – Your brain on art
Interesting to read how advances in brain science lead to confirmation of intuitive but traditionally hard-to-prove hypotheses.