
It was a cold wind that blew through St. Peter's Square at the Vatican over the weekend, but that didn't deter Pope Francis from taking a stroll outside to greet the faithful, as he often does. When images appeared online showing the 86-year-old pontiff atypically wrapped up against the elements in a stylish white puffer jacket and a silver bejeweled crucifix, they soon went viral, racking up millions of views on social media platforms.

The picture, first published Friday on Reddit along with several others, was in fact a fake: a rendering generated with the AI image software Midjourney.

Fake photos generated by artificial intelligence software appear to show Pope Francis walking outside the Vatican in a designer coat, which he never did.

While there are some inconsistencies in the final rendered images — for example, the pope’s left hand where it is holding a water bottle looks distorted and his skin has an overly sharp appearance — many people online were fooled into thinking they were real pictures.

The revelation that they had been duped left some Twitter users shocked and confused.

“I thought the pope’s puffer jacket was real and didn’t give it a second thought,” tweeted model and author Chrissy Teigen. “No way am I surviving the future of technology.” 

The "pope in the puffer jacket" was just the latest in a series of "deepfake" images created with AI software. Another recent example was a set of pictures of former President Donald Trump that appeared to show him in police custody. Although the creator made it clear that they were produced as an exercise in the use of AI, the images, combined with rumors of Trump's imminent arrest, went viral and created an entirely fraudulent but potentially dangerous narrative.

Midjourney, OpenAI's DALL·E 2 and DreamStudio are among the software options available to anyone wishing to produce photorealistic images using nothing more than text prompts, with no specialist training required.
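
As a rough illustration of how little is required, the sketch below uses OpenAI's Python SDK to request an image from a plain text prompt. It is a minimal sketch, assuming the openai package (version 1.0 or later) and an API key in the OPENAI_API_KEY environment variable; the prompt wording, model name and output size are illustrative choices, not details of how any of the fakes described above were made.

```python
# A minimal sketch of text-to-image generation, assuming the OpenAI Python SDK
# (openai >= 1.0) is installed and an API key is set in OPENAI_API_KEY.
# The prompt, model and size below are placeholder choices for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",        # text-to-image model
    prompt="an elderly man in a long white puffer coat, photorealistic street photo",
    n=1,                     # number of images to generate
    size="1024x1024",        # output resolution
)

# The API responds with a temporary URL pointing at the generated image.
print(response.data[0].url)
```

A single prompt like this is the entire creative input; the model handles the rest, which is part of what makes convincing fakes so easy to mass-produce.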

As this type of software becomes more widespread, AI developers are working on better ways to inform viewers of the authenticity, or otherwise, of images.

CBS News' "Sunday Morning" reported earlier this year that Microsoft's chief scientific officer, Eric Horvitz, the co-creator of the spam email filter, was among those trying to crack the conundrum. He predicted that unless technology is developed to enable people to easily detect fakes within a decade or so, "most of what people will be seeing, or quite a lot of it, will be synthetic. We won't be able to tell the difference."

In the meantime, Henry Ajder, who presents a BBC radio series entitled "The Future Will Be Synthesised," cautioned in a newspaper interview that it was "already very, very hard to determine whether" some of the images being created were real.


Video: "How synthetic media, or deepfakes, could soon change our world" (13:56)

"It gives us a sense of how bad actors, agents spreading disinformation, could weaponize these tools," Ajder told the British newspaper the i.

There’s clear evidence of this happening already.

Last March, video emerged appearing to show Ukrainian President Volodymyr Zelenskyy telling his troops to lay down their arms and surrender. It was of poor quality and quickly exposed as a fake, but it may have been merely an opening salvo in a new information war.

So, while a picture may speak a thousand words, it may be worth asking who’s actually doing the talking.
