
ChatGPT is undoubtedly a very powerful tool, capable of producing remarkably good content across a wide variety of fields. Precisely this ability, however, also makes it attractive to cybercriminals, who are already beginning to exploit it to attack companies and individuals. Ermes, an Italian cybersecurity company, took stock of the situation by listing the main risks associated with ChatGPT.

ChatGPT as a cybersecurity risk: the Ermes perspective

The first way criminals have exploited ChatGPT is by imitating it: the tool's enormous popularity has made it easy for attackers to create sites and applications that copy the original, so as to steal information from unsuspecting users. These are in fact ordinary phishing attacks, not even particularly original, but still effective.
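One common defensive heuristic against copycat sites of this kind is to flag domains that closely resemble, but do not exactly match, a known legitimate domain. A minimal sketch in Python, assuming a hypothetical allowlist of domains to protect (the names and threshold below are illustrative, not any vendor's actual logic):

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of legitimate domains; in practice this
# would come from a curated, regularly updated list.
KNOWN_DOMAINS = ["chat.openai.com", "openai.com"]

def lookalike_score(domain: str, known: str) -> float:
    """Similarity ratio between a candidate domain and a known-good one."""
    return SequenceMatcher(None, domain.lower(), known.lower()).ratio()

def is_suspected_copycat(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that strongly resemble a known domain without matching it."""
    for known in KNOWN_DOMAINS:
        if domain.lower() == known:
            return False  # exact match: this is the real site
        if lookalike_score(domain, known) >= threshold:
            return True
    return False
```

For example, `is_suspected_copycat("chat.open-ai.com")` returns `True`, while the genuine `chat.openai.com` and an unrelated domain both return `False`. Real products combine many more signals (certificate age, registration date, page content), but the string-similarity idea is the same.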

Where ChatGPT really comes into play is in phishing attacks where the AI itself writes the texts, with unprecedented accuracy and volume. The use of AI makes it possible to produce sophisticated messages in large quantities, free of the classic linguistic errors typical of Nigerian-prince scams, and to personalize them for each recipient. The same applies to the landing pages of sites built to scam users or steal their credentials.

There is also the risk that users will share sensitive information while using ChatGPT and similar tools, potentially making that information available to others.

We asked Lorenzo Asuni, CMO at Ermes, how the company intends to fight these new threats and whether the usual advice against phishing still applies. “Email will no longer be the only attack channel: it will be easier to build phishing sites, advertising campaigns, display banners and increasingly effective communications tailored to the individual profiles of the people or companies targeted by cybercriminals,” Asuni tells us. “The usual recommendations obviously still apply, but the ability to carry out attacks massively and quickly, combined with the ability to tailor them completely to individual users or groups, will make them increasingly difficult for a single person to detect: that is why you need to protect yourself with AI. At Ermes we are adapting our proprietary technology, which can verify and block the sharing of credentials (email and password) or other predefined sensitive information on forms, sites or platforms such as ChatGPT. We analyze the texts the user shares through the various channels, and soon we will be able to block their sending, or warn the user that they are sharing sensitive information.”
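The mechanism Asuni describes, scanning outbound text for sensitive data before it is submitted, can be illustrated with a minimal sketch. This is not Ermes's actual technology; the regular expressions and category names below are illustrative assumptions:

```python
import re

# Illustrative patterns for two categories of sensitive data.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
# Hypothetical heuristic for fragments like "password: secret" or "pwd=secret".
PASSWORD_RE = re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+")

def find_sensitive(text: str) -> list[str]:
    """Return a label for each category of sensitive data found in text."""
    findings = []
    if EMAIL_RE.search(text):
        findings.append("email")
    if PASSWORD_RE.search(text):
        findings.append("password")
    return findings

def should_block(text: str) -> bool:
    """Block (or warn about) an outgoing message that contains sensitive data."""
    return bool(find_sensitive(text))
```

A real implementation would sit between the user and the form or chat platform, cover many more data types (API keys, payment data, internal document markers), and likely use trained classifiers rather than regexes alone.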

The question remains of how to deal with this new threat: can fire be fought with fire, or AI with other AI? “Cybercriminals have long been using AI to maximize the scale and effectiveness of their attacks,” Asuni tells us. “For this reason, the use of artificial intelligence to defend people, as we do at Ermes, is becoming increasingly fundamental. Today, thanks to machine learning, it is possible to study millions of data points useful for investigating, preventing and detecting attacks, which often follow similar patterns (reusing the same tools, such as malicious code and scripts), so with the right tools you can defuse their effectiveness.”
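The observation that attacks reuse the same tooling is what makes large-scale detection tractable: once a malicious script is fingerprinted, every campaign that reuses it maps to the same key. A toy sketch of that idea, assuming a simple whitespace-and-case normalization (real systems use fuzzy or structural hashing to survive heavier obfuscation):

```python
import hashlib

def script_fingerprint(code: str) -> str:
    """Hash a normalized script so trivially altered copies map to one key."""
    normalized = " ".join(code.split()).lower()  # collapse whitespace, ignore case
    return hashlib.sha256(normalized.encode()).hexdigest()

def group_by_fingerprint(samples: list[str]) -> dict[str, list[str]]:
    """Cluster collected samples that share the same underlying script."""
    groups: dict[str, list[str]] = {}
    for sample in samples:
        groups.setdefault(script_fingerprint(sample), []).append(sample)
    return groups
```

Two samples that differ only in spacing or casing, e.g. `"eval(x)"` and `"  EVAL(x) "`, fall into the same group, which is the point: one detection defuses every reuse of the tool.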

