Hidden danger of ChatGPT: “You can’t look under the hood anymore”
And what happened? “Half the time it answered correctly!” She wanted to see whether the so-called language model would “fall for typical trick questions”. And it certainly could: “The other half of the time it wasn’t smart enough to spot the catches.”
The new version of ChatGPT is distinguished mainly by the amount of text it can handle at once. “Version 3.5, which anyone can try online, can read and respond to 3,000 words. Version 4 can read and understand 6,000 to 24,000 words. That means you can ask much more complex questions.” Like math puzzles, or even feeding entire exams to the text generator.
Both the previous version and this one can also take exams. “A professor at the University of Groningen submitted his exam to ChatGPT and it passed,” says Van Stegeren enthusiastically.
“I know students have submitted exam questions, but GPT-4 is also the version that was integrated into the Bing search engine a few weeks ago. You can ask it all kinds of things and increasingly get a good answer. You can actually have very chatty conversations with it now, like talking to a bot on WhatsApp.”
But, she continues, this software also has downsides. “These kinds of language models tend to hallucinate: they make up facts when they have too little information. For example, if you ask a language model to prove that the number twelve is a prime number, it will happily go along with it. But of course twelve isn’t prime. Without enough knowledge of your own, you can’t tell when such a model is talking nonsense and when what it says is correct.”
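The point is easy to verify without a language model: twelve is divisible by 2, 3, 4, and 6, so it is not prime. A few lines of Python (a minimal sketch, not part of the interview) make the check explicit:

```python
def is_prime(n: int) -> bool:
    # A number is prime if it is greater than 1 and has no
    # divisors other than 1 and itself. Trial division up to
    # the square root of n is enough to decide this.
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

print(is_prime(12))  # False: 12 = 2 * 2 * 3
print(is_prime(13))  # True: 13 has no divisors between 2 and 12
```

A confident-sounding model “proof” that 12 is prime fails at the very first divisor, which is exactly the kind of nonsense a reader without this knowledge cannot catch.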
In addition, there is of course a company behind ChatGPT. “It’s super cool and innovative technology, but there’s an American tech company behind it. And that company has shareholders and a profit motive. Over the past few years, I’ve watched this company, OpenAI, become less and less open about exactly how its models work. In the past they published everything such a language model was based on, but now they proudly proclaim that they keep everything secret. We can chat with ChatGPT, but we can’t see under the hood what happens to our data,” says the researcher.
And the data is where the danger of such language models lies. “Everyone is going to experiment with it enthusiastically, and that’s very good for the field. You see big companies like Slack and Discord wanting to integrate it into their services: many tech companies want to ride this huge wave of attention. But if it gets integrated everywhere, OpenAI gets pieces of our data from everywhere, and I find that worrying if it’s all funneled to an American company.”