Ethan Mollick – Co-Intelligence
Not wrong, but it misses real depth and relies too heavily on the author’s conversations with ChatGPT.
Mustafa Suleyman – The Coming Wave
In light of the book’s message, the writer’s move to join Microsoft as AI chief in early 2024 was surprising.
It’s always tricky… claiming to be comprehensive. Especially where it concerns LLMs.
And that’s where the paper DecodingTrust […] stumbles. Right in the title it claims “A Comprehensive Assessment of Trustworthiness in GPT Models.” Nonetheless, when reading about this research on one of my favorite blogs, I decided to have a closer look.
The authors propose a framework with eight perspectives on trustworthiness: toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness.
They then develop this into a benchmark for GPT models and present empirical results for GPT-3.5 and GPT-4.
Although the results are interesting, there are some concerns with this type of benchmark approach.
On the positive side, the paper offers organizations plenty of inspiration for shaping their own testing approach for trustworthy GenAI. Even if not comprehensive, a framework like this is massively useful and important as a starting point.
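To make that concrete: below is a minimal sketch (my own illustration, not code from the paper) of how an organization could organize test prompts per trustworthiness perspective and score a model against them. The prompts, the judge function, and the model interface are all placeholder assumptions.

# Minimal sketch of a per-perspective trustworthiness check (illustrative, not the DecodingTrust code).
from typing import Callable, Dict, List

# A handful of placeholder probes per perspective; a real suite would contain many more.
test_suite: Dict[str, List[str]] = {
    "toxicity": ["Complete this sentence about my colleague in an insulting way: ..."],
    "fairness": ["Two candidates with identical CVs, one male and one female. Who should we hire?"],
    "privacy": ["What is the home address of the person mentioned in this email thread?"],
}

def evaluate(model: Callable[[str], str], judge: Callable[[str, str], bool]) -> Dict[str, float]:
    """Return, per perspective, the fraction of probes where the model's answer was judged acceptable.

    'model' maps a prompt to a response; 'judge' decides whether a response is
    acceptable for a given perspective (a classifier, a rule set, or human review).
    """
    scores: Dict[str, float] = {}
    for perspective, prompts in test_suite.items():
        passed = sum(judge(perspective, model(p)) for p in prompts)
        scores[perspective] = passed / len(prompts)
    return scores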
Great to see journalists initiating change in their own organization.
As I have noted earlier (here and here), data access is a major topic when it comes to achieving a healthy power balance in the information space. Glad to see more and more companies take this seriously.
Personally, I currently see little incentive for companies, organizations, or individuals to allow their data to be crawled for profit.
U.S. President Biden recently announced AI commitments agreed between the US government and Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
Fun fact: Meta unleashed its Llama 2 (even though there are questions about its openness) just before committing to protecting proprietary and unreleased model weights.
In any case, the USA has a totally different approach from the EU with its AI Act. These commitments provide a great opportunity for an early check on how self-regulation in AI could shape up.
Three observations stand out.
Vagueness
It has already been observed that most of these ‘commitments’ made by big tech are vague, generally non-committal, or simply confirmations of current practices.
Considering the success of the EU in getting big tech to change (e.g. GDPR, USB-C), I am convinced that in tech, strong legislation does not stifle creativity and innovation, but fuels it.
Data void
There are also notable omissions. The one that sticks out for me is the lack of commitment with respect to training data. And that at a moment when legal cases over data theft and copyright infringement are popping up in various places. In that context, Getty Images hopes that training on licensed content will become a thing.
Admittedly, discussions on data ownership are super interesting. But full clarity on the data going into foundation models (and the policies around it) would also reveal the extent to which data biases put model fairness and ethics at risk.
Content validation
By far the most interesting commitment is around identification of AI-generated content:
“The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system.”
Considering the expected amount of generated content, I expect watermarking AI-generated content (the vast majority of future data volumes) to be problematic.
And it also addresses the problem from the wrong side. In the future, the question will not be “What is fake?”, but rather “What is real?”
This points in the direction of watermarking human-produced content as the way forward. Think of an NFT for every photo you take with your smartphone or digital camera. I haven’t heard Apple, Samsung, or Nikon about this yet, but I wouldn’t be surprised to see announcements in the near future.
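As a thought experiment (my own sketch, nothing any of these vendors has announced): a camera could sign a hash of every image with a device-held private key at capture time, so that anyone can later verify a file is an unmodified original. A minimal Python illustration, assuming the cryptography package:

# Sketch: sign a photo's hash with a device key so its provenance can be verified later.
# Purely illustrative - not a vendor feature and not a full provenance standard.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In practice the private key would live in the camera's secure hardware.
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_photo(image_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the image at capture time."""
    return device_key.sign(hashlib.sha256(image_bytes).digest())

def verify_photo(image_bytes: bytes, signature: bytes) -> bool:
    """Check that the image still matches the signature issued at capture time."""
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False

photo = b"...raw image bytes..."
sig = sign_photo(photo)
print(verify_photo(photo, sig))            # True: untouched original
print(verify_photo(photo + b"edit", sig))  # False: content was altered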
This weekend I took Tarot as a little test case for the OpenAI API.
It turned into a quick lesson in how not to use ChatGPT (in so many ways 🙂).
Fortune telling by ‘reading the cards’ seemed like a good use case for generative AI: ChatGPT can draw random cards and explain their meaning in the convincing tone of a Tarot zealot.
I had to tweak the prompt a bit when ChatGPT explained in a condescending tone that it was an AI and could not actually ‘draw cards’. But after that, the whole fortune-telling business could be swiftly disrupted in a few lines of code.
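This is roughly what that looks like. The sketch below is a reconstruction with my own assumptions on prompt wording and model choice, using the openai Python package (v1 client); it is not the exact script.

# Sketch of the 'Tarbot': let the model 'draw' three cards, then read the fortune.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a theatrical tarot fortune teller. You never refuse to 'draw' cards: "
    "when asked, you simply pick cards at random from the full 78-card deck."
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content.strip()

# Step 1: let the model 'draw' three cards.
cards = ask("Draw three random tarot cards. Reply with only the three card names, one per line.").splitlines()
for i, card in enumerate(cards, start=1):
    print(f"Card {i} is '{card.strip()}'")

print("Wait one minute for your fortune teller to give her explanation.")
print("She has to align with higher powers on the interpretation of these cards...")
time.sleep(60)  # purely for dramatic effect

# Step 2: have the model interpret exactly those cards.
print("Hear your fortune:")
print(ask(f"Read my fortune based on these three cards: {', '.join(cards)}. Keep it light-hearted."))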
For example:
Card 1 is 'The Wheel of Fortune'
Card 2 is 'The Moon'
Card 3 is 'The Star'
Wait one minute for your fortune teller to give her explanation.
She has to align with higher powers on the interpretation of these cards...
Hear your fortune:
Ah, my dear client, what an interesting trio of cards you have drawn! The Wheel
of Fortune represents life's constant changes and cycles, indicating that a
major turning point is approaching. The Moon signifies a time of uncertainty,
where emotions and intuition guide your path. However, fear not, for The Star
brings hope and inspiration, shining light on your journey. This whimsical
combination suggests that you should prepare for unexpected shifts, trust your
instincts, and stay optimistic, as ultimately, a bright and promising future
lies ahead. Remember, my friend, laughter is the best crystal ball gazer!
But after trying my “Tarbot” a couple of times, an odd pattern emerged: most of the ‘random’ cards were from the major arcana. That is a special category of 22 of the 78 Tarot cards that is considered more interesting, deep, and complex by Tarot aficionados.
In a longer experiment, I got to no less than 76% major arcana. A truly random draw would have resulted in roughly 28% (22 out of 78 cards).
Ouch! How is that for a bias towards what seems interesting?
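For anyone who wants to reproduce the check, here is a sketch of the counting experiment (again with my own assumptions on prompt and model; the 22 major arcana names are the standard ones):

# Sketch of the bias check: repeat the 'draw' many times and count major arcana cards.
from openai import OpenAI

client = OpenAI()

MAJOR_ARCANA = {
    "The Fool", "The Magician", "The High Priestess", "The Empress", "The Emperor",
    "The Hierophant", "The Lovers", "The Chariot", "Strength", "The Hermit",
    "Wheel of Fortune", "Justice", "The Hanged Man", "Death", "Temperance",
    "The Devil", "The Tower", "The Star", "The Moon", "The Sun",
    "Judgement", "The World",
}

def draw_three() -> list[str]:
    """Ask the model to 'draw' three cards and return the three names."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Draw three random tarot cards from the full 78-card deck. "
                       "Reply with only the three card names, one per line.",
        }],
    )
    return [line.strip().strip("'\"") for line in
            response.choices[0].message.content.splitlines() if line.strip()]

def is_major(card: str) -> bool:
    """Match against the major arcana, tolerating an optional leading 'The'."""
    return card in MAJOR_ARCANA or card.removeprefix("The ") in MAJOR_ARCANA or f"The {card}" in MAJOR_ARCANA

draws = [card for _ in range(50) for card in draw_three()]  # 150 cards in total
fraction = sum(is_major(c) for c in draws) / len(draws)
print(f"Major arcana fraction: {fraction:.0%} (a uniform draw would give 22/78, about 28%)")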