Twitter has restored a feature that promoted suicide prevention hotlines and other safety resources to users looking up certain content, after coming under pressure from users and consumer safety groups.
The feature, known as #ThereIsHelp, placed a banner at the top of search results for certain topics, listing contacts for support organizations in many countries related to mental health, HIV, vaccines, child sexual exploitation, Covid-19, gender-based violence, natural disasters and freedom of expression.
Reuters reported on Friday that the feature had been taken down this week. Citing two people familiar with the matter, the report said the removal was ordered by the social media platform’s owner, Elon Musk.
After publication of the story, Twitter’s head of trust and safety, Ella Irwin, confirmed the removal but said it was temporary.
“We have been fixing and revamping our prompts. They were just temporarily removed while we do that,” Irwin said in an email to Reuters.
Musk then denied the feature had been removed and called the Reuters report “fake news”.
Nonetheless, the report appeared at the start of the Christmas holiday, a fraught time for many, prompting widespread concern. The anonymous sources cited by Reuters said millions had encountered #ThereIsHelp messages on Twitter.
Eirliani Abdul Rahman, a member of a recently dissolved Twitter content advisory group, told Reuters the disappearance of #ThereIsHelp was “extremely disconcerting and profoundly disturbing” even if the removal had been implemented to make way for improvements.
“This is the worst time of the year to remove the suicide prevention feature,” wrote Jane Manchun Wong, a software developer and Twitter user. “Instead of leaving a time gap without suicide prevention feature for a revamp, they could’ve kept the old prompt and replaced it with a new one when it’s ready.”
Early on Saturday, Musk responded, tweeting: “1. The message is actually still up. This is fake news. 2. Twitter doesn’t prevent suicide.”
Online services including Twitter, Google and Facebook have for years tried to direct users to resources such as government hotlines if they suspect a user may be in danger.
Irwin said Twitter planned to adopt an approach used by Google. That platform, she said, “does really well with these in their search results, and [we] are actually mirroring some of their approach with the changes we are making.
“We know these prompts are useful in many cases and just want to make sure they are functioning properly and continue to be relevant.”
Musk has said views of harmful content on Twitter have declined since he took over in October. Then, he said, “almost no one” at Twitter was working on child safety.
“I made it top priority immediately,” he added.
But Musk has cut the teams responsible for handling such material, and observers say self-harm content is thriving on the platform despite a de facto ban.
Twitter launched the warning prompts about five years ago. Some were available in more than 30 countries, according to company tweets. In a blogpost, Twitter said it was responsible for ensuring users could “access and receive support on our service when they need it most”.
Alex Goldenberg, lead intelligence analyst at the non-profit Network Contagion Research Institute, said his group published a study in August – before Musk took control of Twitter – showing that monthly Twitter mentions of terms associated with self-harm had increased by over 500% year on year, particularly among young users.
“If this decision is emblematic of a policy change that they no longer take these issues seriously, that’s extraordinarily dangerous,” Goldenberg told Reuters. “It runs counter to Musk’s previous commitments to prioritize child safety.”