SQUIZ-KIDS-SHORTCUT-TILES

ChatGPT

Comprehension Activities

Hi Squiz Kids! Welcome back to another episode of Squiz Kids Shortcut, the podcast that makes learning fun and easy. I’m Bryce and with me is my co-host Amanda.

Hi everyone, today we’re going to talk about something really cool and a bit mysterious. Have you heard of ChatGPT?

ChatGPT, that sounds interesting. What is it?

Well, ChatGPT is actually a computer program that has been designed to simulate human-like conversations. It’s one of the most advanced artificial intelligence (AI) models out there and has been trained on massive amounts of data.

Wow, that sounds complicated. How does it work?

Great question, Bryce! ChatGPT works by using a neural network, which is a type of computer system that can recognize patterns. It analyzes the input it receives, understands the context, and produces a response that is most likely to be appropriate.

So, it’s like talking to a robot?

Exactly! ChatGPT can have a conversation with you just like a human would. It can answer questions, tell jokes, and even have debates!

——

AND… it can write podcasts. Everything you and I have just read out was actually written by ChatGPT.

Whaaat? How did you get it to do that?

I typed in “Write a Squiz Kids Shortcut podcast for primary school kids explaining chatgpt” and this is what I got. Which is why you said a few things that were totally out of character… like “ChatGPT, that sounds interesting. What is it?” What would you have said in real life?

Something along the lines of… Oh yes, we’ve heard SO much about ChatGPT lately. High schools and universities are really worried that students will start using it to cheat on their assignments. But before we get into that, let’s take you through WHAT exactly ChatGPT is, HOW it works, and then move on to WHY people are worried about it.

Yep, that sounds more like you! I’ll put the full transcript of ChatGPT’s effort to write a podcast in your episode notes… but for now, let’s break it down old-school, human style.

WHAT
So ChatGPT has already told us that it’s basically a computer program that you can talk to.

Yeah, that’s right. You type in a question, like “I have celery, tomatoes, lentils, and bacon in my kitchen. What can I cook for dinner tonight?” It uses machine learning to try to guess what words might come after your question – and given that it has been trained on a whole lot of data, including recipe books, it tries to give you a recipe! In this case, it suggested a lentil and bacon soup with a side salad. It then wrote out the recipe – which included celery and tomatoes, by the way – and gave instructions on how to cook it.

That actually sounds quite delicious. Now ChatGPT was created by a research company called OpenAI … with AI standing for artificial intelligence… and it’s free to create an account.

That’s right. It took me about two seconds to start using it. You type a question or command into a text box. It can be something as simple as “What’s a good birthday present for a 10-year-old?”, or something as complicated as questions from a university chemistry exam. I’ve done both, by the way, and the first thing on the birthday present list was a board game or puzzle.

So what if that particular ten-year-old didn’t like puzzles?

You can add more instructions or details – for example, when I refined my question to ask what a good recent book would be for a ten-year-old who loves sport, it suggested “The Crossover” by Kwame Alexander – the story of twin brothers who are basketball players, and their struggles on and off the court. Which is actually an excellent book – although it’s from 2015, so I’m not sure I’d say it was “recent”.

So HOW on earth does it know all this stuff? And HOW is it able to write like a real person?

HOW
The most important thing to point out is that ChatGPT doesn’t KNOW anything. It has not been programmed with information. Instead, it has been given lots of sentences from the internet, and trained to guess what word might come next when talking about a topic. So when you type in a question, it tries to predict what should come right after the question – the answer! ChatGPT already told us that it uses something called a “neural network”. The term “neural” comes from biology, and means relating to the nerves, or nervous system. A neural network teaches computers to process information in a way that’s inspired by the way that the human brain works. It’s a type of machine-learning process, called deep learning, that uses interconnected computer nodes in a layered structure that looks a bit like the human brain.

Ummm … you lost me at neural … so you’re saying that computers have learned to think just like the human brain?

Not exactly. The network isn’t thinking and feeling like you and I think and feel. But it is looking for patterns, the same way that our brains do. For example, when you are sight reading, you’ll often miss a typo, because your brain KNOWS what letter or word should come next. You’re using a whole lot of information you’ve learned already – like what letters of the alphabet look like, and what words they form when they appear in certain combinations and a certain order. ChatGPT has been given a ton of information from the internet, and has learned to predict which words should come next in its answer to you.
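For grown-ups (or keen coders) following along, here’s a tiny, hypothetical sketch of the “guess the next word” idea – a toy model that just counts which word follows which in a few made-up example sentences. Real models like ChatGPT are vastly bigger and more sophisticated, but the basic intuition of predicting the next word from patterns is the same:

```python
from collections import Counter, defaultdict

# Toy "training data" – a handful of example sentences.
sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count how often each word is followed by each other word.
next_word_counts = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word):
    """Guess the word that most often follows `word` in the training data."""
    counts = next_word_counts[word]
    if not counts:
        return None  # we never saw anything follow this word
    return counts.most_common(1)[0][0]

print(predict_next("sat"))  # "on" – that's what always followed "sat"
print(predict_next("on"))   # "the"
```

Notice the toy model “knows” nothing about cats or rugs – it only counts patterns, which is the point the transcript is making about ChatGPT.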

And it can write its answer in lots of different styles – you can ask it to write a formal letter to your principal, or a silly rhyming birthday poem for a friend, and it can easily find and use the appropriate words.

But … and this is a big but… when ChatGPT answers your questions, it doesn’t tell you its sources, nor who created the original information. It’s not actually searching the internet as you use it, like Google does. It was given an enormous amount of data from the internet, but that hasn’t been updated for a year or two. And it searches that data NOT to look for facts, but to look for language patterns, and which words are most likely to go together to create a believable answer.

Squiz-E the Newshound teaches us every week that not everything we read on the internet is true. So if ChatGPT trained on information from the internet, and some of the information on the internet is untrue – isn’t it possible that it’s serving up info that’s just plain wrong?

Yep – it sure is. Which is why we have to be really careful about relying on it as a truthful source of information. Its programmers tried to teach it to give correct answers. The issue is, the people training it were only human, so if ChatGPT gave a plausible-sounding – meaning believable – but incorrect answer, it would get rewarded when the researchers thought it was correct. ChatGPT isn’t trying to be RIGHT… it’s trying to sound convincing enough that people will think it’s right. That’s how it was taught.

Yikes. I’m guessing that’s WHY some people are worried about this new technology.


WHY
There are plenty of examples of ChatGPT getting factual things wrong, but writing them in a very convincing way. Which could cause you some problems if you ask it to do your homework. Remember I said that I asked it some chemistry questions? It answered incredibly confidently, but it was wrong.

Then there’s the problem of ChatGPT being correct, and people using it to cheat. Lots of schools have banned ChatGPT, and many teachers are now using software that can detect whether ChatGPT was used to write an answer.

One professor I know said that she is using ChatGPT with her uni students, in a clever way. She asks ChatGPT a question, gives her students the answer, and then asks them to go through and critique it… meaning they have to find all of the mistakes that ChatGPT has made.

That’s a smart idea!

The other thing that kids need to keep in mind is that users of ChatGPT are supposed to be 18 or older… although when you create an account, you aren’t asked your age.

That’s a problem, because younger kids could come across content that’s not age-appropriate.

Although, when I asked ChatGPT what the best fart joke was, it wrote back: “As an AI language model, I don’t have a personal preference for jokes. However, it’s important to keep in mind that some people may find fart jokes inappropriate or offensive. It’s always a good idea to consider your audience and whether the joke is appropriate for the situation before telling it. If you do decide to tell a fart joke, make sure it’s in good taste and doesn’t offend anyone.” So… that was a little disappointing.

THE S’QUIZ
This is the part of the podcast where you get to test how well you’ve been listening…
1. ChatGPT is an example of a neural… what?
2. True or false – ChatGPT has been programmed with millions of facts to answer all your questions.
3. True or false – Kids aged 13 to 18 are allowed to use ChatGPT.