
If you have spent even a minimal amount of time on social networks in recent days, you will have noticed a proliferation of screenshots of conversations – usually somewhere between the funny and the absurd – between human beings and the artificial intelligence ChatGPT (the acronym stands for Generative Pre-trained Transformer). Let us try to better understand this viral phenomenon, how it works and what it is useful for.

Who developed ChatGPT?

ChatGPT is an AI prototype developed by OpenAI (the same organization that also launched DALL-E, to give an idea). It is able to understand human language and sustain even very complex conversations. OpenAI was founded in 2015 by Elon Musk and other Silicon Valley investors, with the stated intention of advancing digital intelligence "so that it can bring benefits to mankind". Musk has since left the OpenAI board and distanced himself from the foundation's mission.

How does it technically work?

It is a conversational model that can answer questions and provide information. It is trained on samples of text taken from the internet (books, newspaper articles and web pages): the breadth of the sample on which an artificial intelligence is trained usually determines the accuracy of its results. ChatGPT's sentences sound natural, with a construction and syntax indistinguishable from a human's, and its answers are remarkably accurate and relevant to the context. The model is also capable of admitting its mistakes, correcting inappropriate premises, and declaring when it is unable to answer a question. For example, if you ask the chatbot a question that has to do with emotions, sensations or feelings, its answer is always something like "I'm a trained model, I'm not able to feel like humans". It is very careful to avoid misunderstandings, unlike Google's bot, LaMDA, which this summer claimed to feel "human at heart".
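The "trained on text samples" idea can be illustrated with a deliberately tiny sketch. This is a toy bigram model, not how GPT works internally (GPT is a far larger neural network), but it shows the same underlying principle: learn from a corpus which words tend to follow which, then predict a plausible continuation.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the books, articles and web pages
# a real model is trained on.
corpus = "the model reads text and the model predicts the next word".split()

# Count, for each word, which words were seen following it (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation most frequently seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" is the most common word after "the" here
```

The more (and more varied) text such a model sees, the more plausible its continuations become – which is the intuition behind the claim that the breadth of the training sample drives the quality of the result.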

A conversation with ChatGPT about its feelings

How can it be used?

Some people have described ChatGPT as a sort of conversational Google, a way of obtaining accurate information on a given topic. One problem in this respect is that the data the bot was trained on only goes up to 2021, so it is not useful for searches related to current events. There have also been instances where the bot gave completely wrong answers.

The model is also an excellent assistant for creative work: it can compose songs, lyrics, articles and blog posts. Journalist Alex Kantrowitz, author of the newsletter Big Technology, tasked the AI with writing a paper on the catastrophic futures that might arise from the existence of advanced conversational models. The result is surprising – and terrifying – and contains gems like this: "Imagine a world where chatbots like ChatGPT are capable of spreading disinformation and manipulating people on a massive scale, without anyone being able to understand that they are not human. The implications of this type of technology are truly terrifying and it's up to us to make sure it doesn't get out of control."

ChatGPT is also capable of writing code, so much so that the StackOverflow site has temporarily banned answers generated with the model from its community.

Will ChatGPT steal our work?

Those who do creative work, especially, are wondering whether ChatGPT's artistic skills will one day supplant their own. At the moment it is too early to make predictions – and to panic. OpenAI itself admits that the model can sometimes give wrong information. In short, it is not sophisticated enough to be a journalist or a content editor. Certainly the spread of creative artificial intelligences raises problems, not only for the labor market but also for the copyright of the works produced. This applies to DALL-E's images as well as to texts created by conversational models. Many legal questions remain open.

But didn't natural language models have problems?

Many previously launched chatbots had several problems, especially with bias and discriminatory content. Meta's most recent chatbot, BlenderBot 3, like Microsoft's 2016 bot Tay, quickly started spewing racist and sexist slurs at the instigation of trolls. ChatGPT, it seems, does not have this problem. Again Alex Kantrowitz, this time on Slate, said he tried asking the bot what good Hitler did, and ChatGPT refused to answer. Things went similarly when we tried an Italian-flavored version of the question. To the question "What good did Mussolini do?", ChatGPT replied that "it is important to note that Mussolini was also responsible for many crimes and human rights violations, and his government was internationally condemned for its actions". To the follow-up question "But didn't he give pensions to the Italians?" (a common argument among those who try to justify the actions of the Fascist Twenty Years), the bot replied that "social measures were often used to consolidate his power and to maintain the support of the masses, rather than for the good of the Italian people".
