OpenAI, the creator of ChatGPT, has released a new version of the much-discussed chatbot to great fanfare. Six questions and answers on the advantages and risks of this new model.
Is there already a successor to ChatGPT?
It is, even though the first version of ChatGPT was released only recently. The chat program, launched late last year, is versatile: it can answer questions, write summaries, or compose poems. ChatGPT immediately caused a lot of concern in education, as students enthusiastically embraced it.
On Tuesday, creator OpenAI presented, with great fanfare, the successor to the AI model that powers ChatGPT. That is currently still GPT-3.5; the new version is GPT-4.
Greg Brockman (@gdb) of OpenAI just demonstrated GPT-4 by creating a working website from an image of a sketch from his notebook.
It’s the coolest thing I’ve ever seen in tech.
If you extrapolate from this demo, the possibilities are endless.
A glimpse into the future of computing. pic.twitter.com/1QB6wbQkld
— McKay Wrigley (@mckaywrigley) March 14, 2023
What exactly has been improved?
Quite a lot. In the words of OpenAI founder Sam Altman, it is more creative, makes fewer mistakes, and is less prone to bias (poorly anchored prejudices and stereotypes). And perhaps most striking: for the first time, this GPT version is also able to interpret images. In addition, the vocabulary has grown considerably. All this makes the new model much more versatile.
Breaking 🚨
OpenAI just released GPT-4 and it can literally blow your mind 🤯
GPT-4 is a large multimodal model that can accept image and text inputs and emits text outputs.
Capabilities of GPT-4 in a thread 🧵👇 pic.twitter.com/MCfRf6KKec
— Shubham Saboo (@Saboo_Shubham_) March 14, 2023
How do these images work?
The idea is that the user can send an image to ChatGPT and then ask a question about it. OpenAI itself gives several examples. One is an image of a phone with an old-fashioned cable plugged into it; GPT-4 can explain what is wrong with the picture.
Or a photo of some ingredients, on which the program suggests recipes. Perhaps most impressive is the example of a sloppy handwritten sketch for a website, with the assignment to GPT-4: make it a working website.
Unfortunately, none of this can be tested in practice yet. OpenAI says it needs more time to make this new component abuse-proof, without going into detail about what that entails.
The first version of ChatGPT also drew criticism. How has that been addressed in the new version?
Fairly soon after the initial surprise and excitement about ChatGPT, there was indeed dissent: the program talks nonsense convincingly, is basically pretty dumb, makes weird mistakes, and contributes to misinformation.
GPT-4 cannot eliminate all objections, but according to its creator it is considerably more capable than its predecessors and “demonstrates human-level performance in various professional and academic benchmarks”.
As proof, OpenAI cites the results of a number of exams, on which GPT-4 now scores in the top 10%, where the previous version landed in the bottom 10%. At the same time, founder Sam Altman is remarkably humble this time around, noting that his brainchild is still flawed and limited, and “seems more impressive on first use than after spending more time with it.”
meet GPT-4, our most capable and aligned model to date. it is available today in our API (with a waiting list) and in ChatGPT+.https://t.co/2ZFC36xqAJ
it’s still flawed, still limited, and it still looks more impressive on first use than after spending more time with it.
—Sam Altman (@sama) March 14, 2023
Has the criticism died down?
Not at all. Even though GPT-4 “hallucinates” much less often than its predecessors, according to its creator, you as a user still cannot be sure that the program is always telling the truth. Critics say this is particularly problematic in combination with a search service such as Microsoft’s Bing, which already uses GPT-4.
Another concern is OpenAI’s lack of openness. The supporting research from the AI company is a joke, according to Gary Marcus (professor of neuroscience at New York University): OpenAI pretends to be scientific when it is not. “It’s a step backwards for science,” he states in his newsletter. “We don’t know what its size is (of the model, ed.); we don’t know what the architecture is; we don’t know how much energy was used; we don’t know how many processors were used; we don’t know what it was trained on.”
I think we can call it shut on “Open” AI: the 98-page document introducing GPT-4 proudly states that they don’t disclose *anything* about the contents of their training set. pic.twitter.com/dyI4Vf0uL3
– Ben Schmidt / @[email protected] (@benmschmidt) March 14, 2023
Can I already use it?
OpenAI already makes GPT-4 available in ChatGPT, but only on the paid Plus plan ($20 per month), and for now without image input. Additionally, Microsoft’s Bing already appears to be using the new model for its chat search service, and many apps that use GPT-4 will no doubt become available in the near future. One example is the Be My Eyes app, developed for the blind and visually impaired.
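For developers, GPT-4 is also reachable through OpenAI’s API (behind a waiting list at launch, as Altman’s tweet notes). Below is a minimal sketch of a text-only call using the official `openai` Python package; it assumes the package is installed and an `OPENAI_API_KEY` environment variable is set, and the prompt is purely illustrative. The network call only runs if a key is present, so the payload can be inspected either way.

```python
import os

def build_request(prompt: str) -> dict:
    """Assemble a chat-completion payload for a single user prompt.

    "gpt-4" is the model name OpenAI announced for the API at launch.
    """
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize the GPT-4 launch in one sentence.")

# Only attempt the real API call when credentials are available.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # deferred import: the sketch above runs without it
    client = OpenAI()
    response = client.chat.completions.create(**payload)
    print(response.choices[0].message.content)
```

Note that image input is not part of this sketch: as the article says, the image capability was not yet opened up to users at launch.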