
In the case of churros, ChatGPT appears to have simply accepted the input it received from users who asked it to describe the surgical properties of the Spanish sweet, limiting itself to merging the information it found about the pastry and superimposing it on the "surgery" topic, without realizing that the result was meaningless. ChatGPT may also have simply substituted the term "churros" for another term, such as "scalpel", in texts where that term appeared.

The credit is ours
But if so, why do ChatGPT and related tools so often write sensible things, instead of producing illogical sentences on every occasion? Once again, Gary Marcus explains that the credit belongs not so much to ChatGPT as to us human beings: "The immense database that ChatGPT has access to consists entirely of language emitted by humans, with statements that (usually) are based on the real world." As a result, ChatGPT often seems to be making sense because it is putting together (recomposing) things actually said by real people. Furthermore, ChatGPT uses statistics to work out, even with the errors we have seen, which properties are more likely to combine correctly with others.
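To give a rough idea of what this statistical recombination looks like, here is a deliberately tiny sketch in Python: a bigram counter that only "knows" which word tends to follow which in human-written text. It illustrates the principle, not ChatGPT's actual architecture, and the miniature corpus is invented for the example.

```python
# A toy illustration (nothing like ChatGPT's real scale or architecture):
# a model that only recombines statistics from human-written text.
from collections import Counter, defaultdict

corpus = (
    "the scalpel is a surgical instrument . "
    "the scalpel is used by surgeons . "
    "the churro is a fried sweet ."
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation most frequently seen in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "."

# The output "sounds" sensible only because humans wrote sensible text:
print(most_likely_next("scalpel"))  # -> "is"
print(most_likely_next("churro"))   # -> "is"
# The counter has no notion of whether "churro" and "surgical" belong together.
```

The sketch only shows why statistically recombined human language tends to read as plausible: the statistics come from sentences that were (usually) true, so the recombination inherits their surface coherence without any check on whether it is true.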
From a certain point of view, ChatGPT is the king of the supercazzola: sentences that seem to make sense even though they have none, constructed in a way that can deceive anyone who doesn't know the topic being discussed. As the Tech Review explains, "it is still necessary that a user recognizes a wrong answer or a misunderstood question. However, this approach doesn't work if we want to ask a model like GPT something we don't already know the answer to."
And it is for this reason that, contrary to what many claim, ChatGPT cannot replace search engines: unless we only ever ask questions about subjects we are already well versed in, we have no way of knowing whether the answer ChatGPT gives us is correct or made up.
Google, although not always very reliable, can sleep peacefully for the moment: "There is no way to train a large language model so that it separates fact from fiction. And creating a model that is more cautious about providing answers would often prevent it from giving answers that would later turn out to be correct," explained OpenAI's chief technology officer, Mira Murati.
In the meantime, OpenAI is working on another system, called WebGPT, which can search the web for the requested information and also provide the sources it used. ChatGPT could be upgraded with this ability within a few months. For the moment, however, it is advisable not to trust the information retrieved with this software in any way: rather than with Google, ChatGPT seems to be competing with Count Lello Mascetti.