Several of my fellow bloggers have been experimenting with AI, such as Neil having AI write poems or create book covers, or Monica illustrating her posts with AI-generated images.
It's all good fun and often shows us what AI can do - and where it struggles.
Some commenters have wondered why, for instance, AI often spells words wrongly in images, or struggles to draw hands correctly; others have said that they find AI frightening. Let me try and explain a few things about AI as I understand it; maybe I'll be able to dispel a few myths and alleviate fears.
Please keep in mind that I am by no means an expert on the matter. I have merely been reading up on it and attending a few webinars on various aspects of AI in my line of work (privacy/data protection, mostly in the insurance industry).
What is AI?
There are various definitions of the term Artificial Intelligence, but the one most people will agree on can be found on Wikipedia: "Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of living beings, primarily of humans."
That is of course a very broad definition and can mean a lot of things in different contexts, but let it suffice for now.
What is AI really?
For one thing, the term itself - the "I" in it - is misleading. Software or a machine can never be intelligent the way a human being or some animals can be. But well-programmed software and a well-constructed machine can "learn", true enough.
Things to keep in mind:
When you are confronted with AI - be it in your line of work, because you are subject to it when dealing with a company, or while you're having fun with an image creator - keeping a few things in mind is useful.
Since this post is becoming much longer than I anticipated, I have put my key statements together here; if you don't have time or can't be bothered to read the rest, that's fine.
1) AI can only be as good as its programming.
2) AI is often biased - again, because of its programming.
3) AI can't autonomously decide to do or learn something it was not programmed for.
4) Good prompting brings good results.
5) AI cannot tell truth from fake, or good from bad.
6) It's not AI we need to worry about - it's who uses it, and how.
1) How good AI is depends on two key factors: the database (the "model") it draws on, and the algorithms originally programmed into it (on which basis it usually - but not always - can develop new algorithms, hence the "learning" effect). The adage "garbage in, garbage out" is not only true for readers of certain newspapers and viewers of certain TV channels, but also for AI.
Developers of AI have a scope in mind: What is their particular AI supposed to do? Is it generative (generating text, sound - such as speech - or images that did not previously exist) or static (making decisions based on certain criteria, such as "a person over 50 is offered tariff X for life insurance, a person under 50 is offered tariff Y")?
The AI in the first example "learns" with every bit of text and every image it creates. It re-feeds its own database every time it is asked to write something or create a picture. For instance, the poem Neil had AI create recently is now part of that AI's database, which means when the next person asks for a poem with the same prompt (more about prompts later), the AI can draw on its own archive. That doesn't necessarily mean it will produce the same lines as it did for Neil, but there will most likely be similarities.
The AI in the second example doesn't "learn" - it simply repeats what it was programmed to do, and its decision will be the same every time the criteria are the same. The person over 50 will never be offered tariff Y, and someone under 50 never tariff X.
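For anyone who likes to peek "under the bonnet": here is a minimal sketch, in Python, of what such a fixed rule could look like. The age threshold and tariff names are just the made-up ones from my example, not anything a real insurer uses.

```python
# A tiny, made-up "static" decision rule: same input, same output, always.
def offer_tariff(age):
    """Return the life insurance tariff offered for a given age."""
    if age > 50:
        return "Tariff X"  # a person over 50 always gets X
    return "Tariff Y"      # anyone else always gets Y

print(offer_tariff(62))  # Tariff X
print(offer_tariff(34))  # Tariff Y - and it will never be anything else
```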
Maybe at some stage during your school years you learned about algorithms and probabilities. Even if you didn't, the basics are not that complicated, and you can find good explanations on Wikipedia. Let's have a look at them with an example most of us can relate to:
You look for a book in an online shop with recipes for Yorkshire pudding. You find one and order it; once it's been delivered to you, you surprise your loved ones with a great meal, and they love you more than ever.
Next time you visit that shop (or even while you're still on the site, completing your order), you see recommendations. These are based on algorithms. A reasonably well programmed shop will show you more cook books, maybe with recipes for other British culinary classics or for typical Yorkshire dishes. A less well programmed shop will show you books about Yorkshire terriers; that's because its algorithm does not take into account that the dish and the dog are not directly related - it just sees that they contain the same word. To work well, the algorithm needs access to the database of the shop's products - all of them.
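To make that concrete, here is a toy sketch of the "less well programmed" shop. The catalogue is invented, and a real shop's algorithm would of course be far more elaborate:

```python
# A naive recommender that matches on shared words only - which is
# exactly how the Yorkshire terrier book sneaks in.
catalogue = [
    "Yorkshire Pudding Recipes",
    "Classic British Cooking",
    "Yorkshire Terriers for Beginners",
    "Gardening in Small Spaces",
]

def naive_recommendations(purchase):
    purchase_words = set(purchase.lower().split())
    return [title for title in catalogue
            if title != purchase
            and purchase_words & set(title.lower().split())]

print(naive_recommendations("Yorkshire Pudding Recipes"))
# ['Yorkshire Terriers for Beginners'] - same word, unrelated product
```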
For the probability bit about such recommendations, the algorithm needs access to the shop's sales history. If a certain number of customers have bought not only the same book you have just ordered, but also a biography of Marilyn Monroe, that biography will now be recommended to you - even if the two books have nothing in common. But how probable is it that a significant number of customers have bought both books together? Recommendations based on high probability will be more likely to work for you. Now, if many who buy the cook book get the MM biography recommended AND the two are offered at a special bundle price, more customers will buy the bundle - and the probability of purchase rises, leading to more recommendations, leading to more purchases, further increasing the probability.
See what I mean?
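If you fancy seeing the probability bit as an actual calculation, here is a toy version with an entirely invented sales history:

```python
# Of all orders containing book A, what share also contained book B?
orders = [
    {"Yorkshire Pudding Recipes", "Marilyn Monroe: A Biography"},
    {"Yorkshire Pudding Recipes", "Classic British Cooking"},
    {"Yorkshire Pudding Recipes", "Marilyn Monroe: A Biography"},
    {"Classic British Cooking", "Gardening in Small Spaces"},
]

def co_purchase_probability(book_a, book_b):
    with_a = [order for order in orders if book_a in order]
    with_both = [order for order in with_a if book_b in order]
    return len(with_both) / len(with_a) if with_a else 0.0

p = co_purchase_probability("Yorkshire Pudding Recipes",
                            "Marilyn Monroe: A Biography")
print(f"{p:.0%} of cook book buyers also bought the biography")  # 67%
```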
Developers of AI would love to be able to feed "the real world" into their AI. But even with all the data in the world, we cannot re-create the real world in a database - not really. It is always only an incomplete model of reality, based on available data. Think of a map - it represents a certain town or area, but it does not recreate it; it's only a two-dimensional model, and a static one at that, not able to reproduce changes, not giving you an idea of the sounds and smells of the place.
To develop a model for generative AI, a HUGE database is needed. For static AI, the requirements are less demanding.
If your AI is supposed to be able to generate a picture of a dog in the snow, it must first have many images of dogs and of snow - it can't "know" what a dog is or what snow looks like unless it is "taught" about them. Give your AI pictures of 100 different kinds of dog, and it will be able to generate a vast variety of dog images - many more than the 100 you fed it. Give it only 10 pictures, and it will still produce many different dog portraits for you, but not as sophisticated as those of the better-fed AI.
2) Let's not forget about the bias. AI doesn't think and has no emotions; it being biased is not its fault, but that of the developers responsible for selecting its underlying database.
Here is an example I was told about last week at a webinar: in order to "learn" to recognise horses, an AI's database was fed with a large number of photos of horses in all shapes, sizes and colours; some were standing, others jumping, running or resting on the ground, etc. When testing their AI, the developers found that it recognised horses alright, but it also classified as horses many other animals, people and objects that had nothing horse-like about them. It took the developers a while to find out why - it was the copyright sign included in each photograph. The AI had simply matched the part of each picture it recognised most easily, and that was the copyright sign.
Can static AI also be biased? Yes, of course. Let's say a company wants to facilitate predictions about their customers' paying habits, and base their pricing on how likely it is that someone will need several reminders or not pay at all. The AI they use is fed with payment data of hundreds of thousands of anonymous consumers. If payment information is related to post codes, a pattern emerges, showing that consumers from a certain area are notoriously unreliable payers. Based on this, the company offer their products and services to anyone from that area at a higher price, actively discouraging them from buying - and if you happen to live in that area but are a perfectly good customer, you'll suffer from that bias.
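As a purely hypothetical sketch of what such a postcode penalty could look like in code (all postcodes and rates below are invented):

```python
# Everyone in the "unreliable" area pays the surcharge - however
# reliably the individual customer actually pays their bills.
late_payer_share_by_postcode = {
    "10001": 0.05,  # area with mostly reliable payers
    "20002": 0.40,  # area with many late payers in the payment history
}

def quoted_price(base_price, postcode):
    share = late_payer_share_by_postcode.get(postcode, 0.10)
    surcharge = 0.25 if share > 0.30 else 0.0  # flat penalty for the area
    return base_price * (1 + surcharge)

print(quoted_price(100.0, "10001"))  # 100.0
print(quoted_price(100.0, "20002"))  # 125.0 - even for a perfect customer
```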
3) If you have seen the AI-created book covers on Yorkshire Pudding's blog or some of Dawn Treader's AI-generated pictures, you will have noticed that the spelling is often not right. Why is that? You've guessed it - again, it's down to programming.
Generative AI is programmed to generate text, not to calculate numbers. A text-based AI such as ChatGPT (Generative Pre-trained Transformer) is exactly what it says: a chatbot, designed to "chat" to whoever uses it. At its base is an amount of text so huge I can't get my head round it - books of all kinds (novels as well as non-fiction), internet articles (probably even content from our blogs), scientific publications - you name it, ChatGPT has it. BUT... and here come two "buts":
Chatbots (in general) are meant to communicate using text (yes, I know - GPT-4 has now learned to use visual content, too). They are NOT meant to calculate. If you ask a chatbot "what is 2 + 2?" it will probably answer correctly, but not necessarily so. If, for instance, it finds "2 + 2 = 5" in its database, it may give you this result - not because it "believes" it to be true, but because it cannot add; it merely repeats what it has found, and its algorithm has determined this to be the most probable answer.
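Here is a toy sketch of that "most probable answer" idea - a massive simplification of what a real chatbot does, of course, and the little "corpus" is entirely invented:

```python
# A toy illustration of why a chatbot may get "2 + 2" wrong:
# it does not add, it repeats the answer it has seen most often.
from collections import Counter

# invented snippets of text found after "2 + 2 =" in our mini-corpus
answers_seen = ["4", "4", "5", "4", "5", "5", "5"]

def most_probable_answer(seen):
    """Pick whichever answer occurs most frequently."""
    return Counter(seen).most_common(1)[0][0]

print("2 + 2 =", most_probable_answer(answers_seen))
# 2 + 2 = 5  (because "5" happens to appear most often here)
```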
Similarly, an image-creating AI was trained to create images, not to spell. That it does not always get the spelling right even when a correctly spelled word was used in the prompt just goes to show its limits.
4) Generative AI needs prompting; it doesn't act autonomously, waking up in the morning and thinking "Oh, I might write a poem about spring today." A prompt is what you type in to trigger the AI's work.
Some of you have become pretty good at this, and you have probably noticed how you get better results when you are more specific in what you type in. Say, I want AI to create a picture of myself, cartoon-style, wearing a yellow dress. Just typing in "picture of a woman in a dress" will show me just that, but probably looking nothing like me. "Picture of a woman in a yellow dress" will get closer to what I want. "Picture cartoon-style of a woman with short white hair, wearing glasses and a yellow dress" would probably give a more satisfying result.
Adding "blue eyes" to the prompt and using "librarian" instead of "woman", this is what I get - the red nose and pearl studs are the most similar features to the real me :-) |
Can you guess what term I put in for "librarian" in my next prompt to get this result?
The same goes for text-generating AI: be concise but specific in your prompting. Avoid superfluous words such as "maybe" or "perhaps" - even adjectives such as "beautiful" are not always a good idea. But the more you experiment with it, the more you'll understand how it works, and the closer you'll get to the desired results.
5) We all know that the internet and some of our traditional media are full of fake news. Not everything presented as a fact is necessarily true, and many news reports are biased in some way. But humans know what they're doing when they bend the truth or edit a report to fit their own agenda - AI does NOT.
Ask a text-generating AI to summarise Obama's life for you, and you may - or may not - find that it gives you a mix of fake and true information about the ex-president, such as his nationality and religion.
Therefore, I urge anyone who uses AI to generate text that is meant to contain facts to always cross-check those facts before publishing the text anywhere. As described before, generative AI re-feeds itself on its own results. This means that if your newly generated text contains false information, that information will be stored in the AI's database and taken as "truth" by the AI - it may use it the next time someone prompts it with a similar request. That way, false information is (often unwittingly) repeated across the internet until many people believe it to be true.
6) The above-mentioned example of wrong information gives you an idea of what there is to worry about - not AI in itself; it can indeed be highly useful, quite apart from providing some of us with hours of fun.
But people use AI to achieve something, and their aims are not always honourable. If biased AI is used - deliberately or unwittingly - to make decisions about who gets what offer, who is observed by the police, who is subject to which precautionary measure, things can get really ugly. If AI is used to spread fake news, things will (and do) get ugly.
Of all the posts in the history of my blog, this is the one I have spent the most time writing. I could go on about this subject for a lot longer, but I know from my own experience that overly long posts are not welcome. Still, I hope some of you feel a bit better equipped now when confronted with AI - and confronted with it we all are, whether we want it or not, and whether we know it or not.
I can see why you are so good at your job, Meike, and do presentations for your company. You are thorough with your research and present it all in an easy-to-understand way.
I think the people who will misuse AI make it scary for me. The internet has already complicated the truth by making it easier to spread lies, and this will just make it worse.
Thank you, Ellen - for reading my longest post ever (so far), and your kind words.
You are right about the internet - especially social media - facilitating abuse and bullying, conspiracy theories, fake news and other unpleasant or outright dangerous things. And of course AI can be misused that way, too.
Meike, you make a good and useful summary here - and thanks for taking the time to do so, and sharing it. Basically, I feel that I knew most of it, even if my own "database" is not quite up to date with all the right terminology... ;-) So you may still find me arguing with (or about) the Image Creator from time to time as if it were a live person, even though in reality I'm well aware that it is the human programming behind it that is less than perfect. And what worries me most about AI is indeed its inability to tell whether the data it's been fed are true or false - combined with our own inability to do the same (or our laziness to double-check...)
PS. Did you ever read Kazuo Ishiguro's "Klara and the Sun"? I highly recommend it. I wrote a review of it back in August 2021 here
Glad you took the time to read it, Monica - thank you!
Strangely enough, I do remember your review of Klara and the Sun, but have not commented on it. No, I have not come across it outside your blog, let alone read it, but it sounds interesting, being so different from most of my reading.
It has sprung to mind for me more than once in recent discussions of AI...
Btw, did you add "librarian with secrets" to generate that image with the padlocks?
No - I added data protection officer 😊
Ah, I didn't think of that (if I've heard your "title" before, I failed to store it in my memory)...
I worked with rule-based expert systems in the 80s and never thought they were any good. Self-learning neural networks have a similar appeal today, but again I am not convinced. I tried to generate an image to illustrate my post about my aunt's driving test, but the woman pictured did not show the right level of nervousness, nor the examiner the right degree of grumpiness. It could not create anything truly witty or original. Good illustrators are still best.
The problem, as you say, is that they don't know what they don't know. That is true of people, too, but that is exactly what true knowledge and expertise are, i.e. knowing the limits of what you know. Or as a guitar teacher once told me: it takes ten years before you realise how crap you are.
You know a lot more about the matter than me, Tasker. Thank you for reading and commenting.
Knowing one's limits (and then either accepting them or working on them) is sensible - and it is something AI can't do.
Not really. We all know different things. AI can access it all, but it is still basically like a parrot.
In my literature class the young ones are using AI apps to generate things that I still use my own head for. For instance, this week some said that they had used AI to generate a random list of words for our literary homework. I thought of words for myself. (We then put the words together for the final written project, so they had not done anything wrong.) Whilst AI has many uses for the better, it appears to me to be another way of making us lazy, just like calculators did once upon a time. Another student told me that she had used AI to rewrite a sample essay we had been given in a different style so that she could understand it. The whole thing frightens me at the moment.
As you say, AI has many uses. For school or at uni, I find it hard to draw a line beyond which it is cheating in my eyes. At work, I prefer to write my own reports, regulations, concepts, presentations and training material, but I have been thinking of using AI to generate pictures for training sessions.
As for the frightening bit, I am not sure why that is.
New regulation at EU level will mean that, just as before, it is not allowed to subject any person to an automated decision of a certain significance without the possibility of human intervention and/or an appeal to revise the AI-made decision.
I think there is always a bit of fear of the unknown with new technology, especially one as far-reaching as AI is likely to be, and jobs in some spheres will disappear.
Phew! Well done Meike! You have addressed A.I. questions with clarity and understanding. The phenomenon arrived so recently that many people have not been able to catch their breath. This was a very worthwhile and informative blogpost.
Thank you, Neil. I enjoyed writing it and am glad it comes across the way I intended. Now all I need to do is sort out the formatting - when I copied the definition from Wikipedia, the HTML formatting of the white background was copied along with the words, and I want to get rid of that. It looks daft.
Interesting post. People have been scared of the onset of automation, computers and now AI. There are still some things machines can't do, but we must be wary of believing what we are seeing, an example being the Princess of Wales's recent photo! (written by myself).
You are right, L; whenever new technology is making itself felt, there will be those who embrace it and those who are scared.
We need to be able to make informed decisions, and I hope my post informed a handful of readers who weren't really sure what AI is all about.
Meike, what an amazing educational post! I'm totally scared to death by AI but will read this again when I can concentrate better (workmen here several days 'destroying my house'!). I'm ready to run away! You are such a smart woman, Bob and I admire you a lot dear.
Hugs - Mary
Thank you, Mary!
Hopefully, my post helped to alleviate your fears somewhat. I am not easily scared, let alone scared to death, but I do worry - mostly about current situations such as the wars in Ukraine and Gaza, or the next president of the US. AI is not part of my worries and fears, but I want to stay informed about it. Know your enemy, right? :-) (Not that AI is our enemy!)
Hugs, Meike
X
Great overview. AI seems to be moving at the speed of light right now!
Hello, and welcome to my blog - I don't think we have "met" before.
Thank you for reading and commenting!
Yes, everything is happening very fast right now.
Great insights! I really appreciate how clearly you’ve outlined the topic. Your post has provided some valuable clarity. Thanks for sharing!
Not sure whether yours is a genuine comment - I found it in the spam folder.