A brave attempt to put up a framework for assessing technological innovations, that is rich in ideas, which are in many cases [in 2023] still relevant (e.g. Cognifying in the light of GenAI), but sometimes feel outdated (e.g. Sharing in a post-truth world).
AI commitments and gaps

U.S. President Biden recently announced the AI commitments agreed between the US government and Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
Fun fact: Meta unleashed its Llama 2 (even though there are questions about its openness) just before committing to protecting proprietary and unreleased model weights.
In any case, the USA has a totally different approach from the EU with its AI Act. These commitments provide a great opportunity to do an early check on how self-regulation in AI could shape up.
There are three observations that stand out.
Vagueness
It has already been observed that most of these ‘commitments’ made by big tech are vague, generally non-committal, or mere confirmations of current practice.
Considering the success of the EU in getting big tech to change (e.g. GDPR, USB-C), I am convinced that in tech, strong legislation does not stifle creativity and innovation, but fuels it.
Data void
There are also notable omissions. The one that sticks out for me is the lack of commitment with respect to training data. And that at a moment when legal cases over data theft and copyright infringement are popping up in various places. In that context, Getty Images hopes that training on licensed content will become a thing.
Admittedly, discussions on data ownership are super interesting. But full clarity on the data going into foundation models (and the policies around it) would also sharpen our view of the extent to which data biases put model fairness and ethics at risk.
Content validation
By far the most interesting commitment is around identification of AI-generated content:
“The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system.”
Considering the expected amount of generated content, I expect that watermarking all AI-generated content (soon the vast majority of data volumes) will be problematic.
And it also addresses the problem from the wrong side. In the future, the question will not be “What is fake?”, but rather “What is real?”
This points in the direction of watermarking human-produced content as the way forward. Think of an NFT for every photo you take with your smartphone or digital camera. I haven't heard Apple, Samsung, or Nikon on this yet. But I wouldn't be surprised if we see announcements in the near future.
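To make the idea concrete: watermarking human-produced content at capture time could amount to the device signing each photo, so anyone can later check that the bytes are unmodified. The sketch below is purely illustrative — the key handling, function names, and HMAC-based scheme are my assumptions, not any vendor's actual mechanism (real proposals such as C2PA use public-key certificates and are far more involved).

```python
import hashlib
import hmac

# Hypothetical per-device secret; a real scheme would use an asymmetric
# key pair protected by the camera's secure hardware (assumption).
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_photo(image_bytes: bytes) -> str:
    """Return an authenticity tag bound to the exact image content."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, tag: str) -> bool:
    """Check that the image bytes still match the tag made at capture time."""
    return hmac.compare_digest(sign_photo(image_bytes), tag)
```

Any change to the image bytes, however small, invalidates the tag — which is exactly the property you would want for "proof of real".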
There are a huge number of ways in which Artificial General Intelligence (AGI) can take over the world, rendering humanity essentially useless.
Nov. 2017: Interesting exploration of the implications of AGI, marred by Analytical Philosophy's typical preference for constructing intricate, highly theoretical scenarios while under-emphasizing basic challenges (in the case of AGI: lack of robustness / antifragility).
Jun. 2023: The writer has leveraged the recent rise of LLMs like ChatGPT to further fuel fear of an AGI break-out, even though other AI-related risks require more imminent attention.
The careful study of ancient wrecks reveals much about how, through the ages and across civilizations, engineers have solved the same challenges in different ways.
Richard Steffy – Wooden Ship Building and the Interpretation of Shipwrecks
It would be a worthwhile research topic to map the development of shipbuilding onto the principles of disruptive innovation as laid out by Clayton Christensen.
Ongoing advances in technology cause ethical norms to develop incredibly fast
Filled with highly interesting statistics about the evolution of public perception of ethical issues.
The philosophy of the Silicon Valley elite is just a bunch of ill-understood one-liners from preferably obscure thinkers
Adrian Daub – What tech calls thinking
Entertaining and polemic book, although many of the author’s points hardly need to be argued.
Marginal communities provide natural experiments that convincingly illustrate basic economical mechanisms
Richard Davies – Extreme Economies
Well-chosen examples (prisons, refugee camps, declining cities, etc.) illustrate why economics is a social science.
Natural philosophy transformed into science thanks to the commitment and sacrifice of some thrill-seeking geeks
Richard Holmes – The age of wonder
Vividly conveys how science was once considered an undertaking for daring adventurers.
The Silicon Valley philosophy of innovation and disruption undervalues the importance of maintenance and durability
Lee Vinsel, Andrew Russel – The innovation delusion
Funnily enough, the polemic narrative applies all the tricks of typical innovation literature to promote a maintenance mindset.
To become successful as a startup founder: copy everything you can and only invent what you must
Jim McKelvey – The innovation stack
The book ends up being exactly what it tries to avoid: just another entertaining founder story (in this case about Square).
The US is losing out in AI, due to a lack of long-term vision and direction
The book’s set-up with multiple scenarios for the future works surprisingly well and is especially concerning for European readers: Europe is almost completely irrelevant in all of Webb’s scenarios.
To unlock creativity, make sure you get the culture right
The best quote is not from the author: “Quality is the best business plan” (John Lasseter, director of Toy Story).
Thanks to the US phone monopoly, Bell labs could produce breakthrough technologies
Jon Gertner – The idea factory
The fascinating history of Bell Labs illustrates how a long-term view is essential for technological progress.
Nine out of ten times, what seems to be a human error is actually caused by a faulty design
Don Norman – The design of everyday things
Elegant book full of fascinating examples of design thinking.
Economically speaking, AI makes prediction a commodity – and nothing more
Ajay Agrawal, Joshua Gans, Avi Goldfarb – Prediction machines
The authors see AI as just a new option in the division of labor which, although it can have rather dramatic consequences, does not support apocalyptic AGI fearmongering.
Running a socially responsible business is often just egotism in disguise
Anand Giridharadas – Winners take all
Giridharadas’s key argument is that elites only support change as long as their own privilege is not endangered.
Facebook’s content strategy leads to filter bubbles, thereby destroying the cohesion in society
When a big tech investor like McNamee argues for stricter regulation, it makes the argument all the more convincing.
AI outcomes reflect the thinking of the technochauvinists that built it – which may not be desirable for society
Meredith Broussard – Artificial Unintelligence
A great effort to democratize AI and peel off some of the layers of mystique that harm the public debate (although the case against technochauvinism seems at times a bit too shallow).
As an incumbent, organize disruptive innovation away from your core business
Clayton Christensen – The innovator’s dilemma
The history of disk drives and mechanical excavators showcases how difficult it is for incumbents to come out on top when technological innovation hits their market.
Make sure you create value, and maintain power over transactions on your platform
Geoffrey Parker, Marshall Van Alstyne, Sangeet Choudary – Platform revolution
Remember: there are many ways in which platforms can fail!