AI literacy

or: what you need to know now

Björn Behn
6 min readMar 16, 2023
from DALL-E

Apparently we have moved directly from being mind blown and posting screenshots of our ChatGPT conversations in disbelief, to sharing the “50 best AI tools for productivity hacks” and “The 25 best ChatGPT prompts for viral social media posts.” I believe from here we continue to feeling appropriately sorry for the people who’ll have to lose their jobs for the sake of progress, and then we accept our fate and move on?

How about, with GPT4 being released upon humanity, we take a moment, and a step back?

1. How does AI work?

Fun fact: Despite what these “AI advisors” who were still “Web3 experts” one year ago might tell you, no one fully knows how ChatGPT and generative AI works. It’s not quite a “black box”, though, let’s call it a “magic box” and think of it like you think of quantum physics, or our financial system: We don’t really understand it, but we can do some useful stuff with it. Basically you put a request into a magic box and on the other end a text or image comes out. In that magic box there is an autocomplete on steroids, which required crazy amounts of data to be able to now require crazy amounts of energy to do some stochastic tricks.

2. Where does the data come from?

All the data that was used to train those models has been taken from the internet, including all of the bias and none of the copyrights.

Regarding the bias: Watch Coded Bias.
Regarding the copyrights: Getty images is already suing Stable Diffusion and if you look at the images that their AI created, you can understand why:

Stable Diffusion, via The Verge

Since the models were trained on historical data, you might wonder: Is the intelligence of AI just the mediocrity of human intelligence? Is the technology we perceive as the future actually locking us into the past? The answer to both questions is: 🙈

3. Are the answers the AI gives me correct?

Only if you get lucky. The models were trained to gives answers that are comprehensible and that sound convincing, not to give factually correct answers. The AI “hallucinates”, so the right attitude to any output should thus be to assume that it is wrong and requires to fact checking, and not to assume it’s correct unless proven otherwise. When OpenAI says that GPT4 is 80% more accurate than ChatGPT, it rather means the answers are “80% less totally random machine dreams”. The least obvious place to use a technology, which gives answers that sound plausible but might be wrong, is a search engine. Which is of course what Microsoft did first. Interesting choice. And by interesting, I mean recklessly profit-driven.

4. How much can the AI do?

There is a difference between what the AI can do and what the company allows you to do with it. Think of these guardrails as a translator the company puts in between you and the AI, who tries to prevent it from telling you illegal or inappropriate things. Or better, think of the AI as the drunk CEO at the Christmas party who knows everything and has zero inhibitions, and you are interacting with his assistant who tries to do damage control. This image is fairly precise, since the role of the assistant is not to protect you, but the CEO, in the same way the companies try to protect their AI rather than the user.

Anyway, the point is that there are lots of ways to still get to the answers you want. Despite OpenAI having learned from all the nasty things you tried to get out of ChatGPT and used them to improve GPT4, it’s hard to see how these guardrails won’t always be a step behind. There is already a whole website that is collecting jailbreak prompts to overcome the guardrails.

5. Is it safe?

This technology will be, like any other technology, as safe as we decide to make it. Sure, it’s fun to write a poem about the joy of dishwashing in the style of Shakespeare — but these funny AI tools also create new threat vectors for our political systems, economies and societies. Remember how your funny Facebook quiz suddenly resulted in Brexit? Some say these AI tools even pose existential risks. Maybe, then, we should not just brace for impact. We should try to avoid the most obvious negative consequences, and think about mitigating less obvious second-order effects.

When you consider that it took a genocide and a member of the Russian mafia to become US president before we started talking about regulating social media, I propose we start earlier this time. Microsoft’s Ethics and society team within their AI organization is a great start. The fact they laid off the entire team before the release of GPT4 is not that reassuring, though.

6. Is it conscious?

The answer is: It doesn’t matter. First of all, we don’t really know what consciousness is, but it appears that something emerges from matter at some point… Feel free to make it through the 3000 pages of Ian McGilchrist’s latest work for a better answer. But what the categorical difference would be between something emerging from the neural networks of “sentient” biological beings and the neural networks of machines, is anyone’s best guess.

Moreover, we don’t want to play philosophical games, but assess the implications of AI technologies — and in this regard, we don’t care about actual consciousness, but about perceived consciousness and sentience! This is crucial. And this is obviously an attribute of ChatGPT and LaMDA, since that’s the very element that makes the dialougues so fascinating. Perceived consciousness is also the reason a highly educated engineer put his well-payed job on the line at Google, and eventually sacrificed it trying to protect an AI he considered sentient. I mean, people cried when their Tamagotchi died. We anthropomorphize everything from cars to clouds and rocks and get attached to our lucky socks. Now we’re confronted with a conversation partner who can remember the last 64000 words we said, can take on any character or mood we want, always takes the blame and has no demands at all? Good luck.

7. Will we get cyborgs next, where AI is combined with robots?

Yes. Despite some researchers saying this is unlikely, if you can imagine it and if you’ve seen the latest videos from Boston Dynamics and if there is any chance that interacting in the real world would improve the training data of the AI, then yes, someone will try it.

8. What to do now?

Play around with ChatGPT, DALL-E and Midjourney, to understand what’s happening.

Don’t embarrass yourself by asking stupid prompts and then posting the equally stupid reply with a snarky comment about “see, it’s just hype.”

Listen to this episode of the “Your undivided attention” podcast.

Instead of listening to “AI prompt wizards”, check out the work of people who have researched AI and its implications for years and not just since last December. (And who care about using correct terminology, unlike me ;)
Like Mona Sloane and check out her “Co-Opting AI” series. Or Timnit Gebru and read her seminal paper about the dangers of “stochastic parrots”.

Go for a long walk in nature and think about what matters in life.

--

--

Björn Behn

Interested in all ways to understand this world. Looking for questions, not answers. Curious about the human and the digital. — bjornb@mailbox.org