Dec 2 (Reuters) – Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favoring restrictions on distribution rather than removing certain speech outright, its new head of trust and safety told Reuters.
Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas such as child exploitation, regardless of potential impacts on “benign uses” of those terms, said Twitter Vice President of Trust and Safety Product Ella Irwin.
“The biggest thing that’s changed is that the team is fully empowered to move fast and be as aggressive as possible,” Irwin said Thursday, in the first interview a Twitter executive has given since Musk’s acquisition of the social media company in late October.
Her comments come as researchers report a rise in hate speech on the social media service after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or engaged in “blatant spam”.
The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Musk cut Twitter’s staff by half and issued an ultimatum to work long hours, resulting in the loss of hundreds of additional employees.
And advertisers, Twitter’s main source of income, have fled the platform over concerns about brand safety.
On Friday, Musk promised “a significant strengthening of content moderation and the protection of freedom of expression” during a meeting with French President Emmanuel Macron.
Irwin said Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying security was the company’s top priority. “He insists on this every day, multiple times a day,” she said.
The approach to safety Irwin described at least partly reflects an acceleration of changes that had already been planned since last year around Twitter’s handling of hateful conduct and other policy violations, according to former employees familiar with that work.
One approach, captured in the industry mantra “freedom of speech, not freedom of reach”, is to leave up certain tweets that violate the company’s policies but block them from appearing in places like the home timeline and search.
Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before Musk’s acquisition. The approach allows for freer speech while reducing the potential harm associated with abusive viral content.
The number of tweets containing hateful content on Twitter rose sharply in the week before Musk tweeted on November 23 that impressions, or views, of hate speech were declining, according to the Center for Countering Digital Hate – one example of researchers pointing to the prevalence of such content even as Musk touts a reduction in its visibility.
Tweets containing anti-Black slurs that week were triple the number seen in the month before Musk took over, while tweets containing a gay slur rose 31%, the researchers said.
‘MORE RISKS, MOVE FAST’
Irwin, who joined the company in June and previously held security roles at other companies including Amazon.com and Google, pushed back against suggestions that Twitter lacked the resources or the will to protect the platform.
She said the layoffs had no significant impact on full-time employees or contractors working in what the company calls its “Health” divisions, including in “critical areas” like child safety and content moderation.
Two sources familiar with the cuts said more than 50% of the Health engineering unit was laid off. Irwin did not immediately respond to a request for comment on that claim, but has previously denied that the Health team was seriously affected by the layoffs.
She added that the number of people working on child safety had not changed since the acquisition and that the team’s product manager was still in place. Irwin said Twitter was backfilling some positions for people who had left the company, though she declined to provide specific figures on the extent of the turnover.
She said Musk was focused on expanding the use of automation, arguing that the company had in the past erred on the side of time-consuming and labor-intensive human reviews of harmful content.
“He encouraged the team to take more risks, to move fast, to secure the platform,” she said.
On child safety, for example, Irwin said Twitter had shifted toward automatically removing tweets flagged by trusted figures with a proven track record of accurately flagging harmful posts.
Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specializes in child sexual abuse material, said she recently noticed Twitter removing some content as quickly as 30 seconds after she reported it, without acknowledging receipt of her report or confirming its decision.
In Thursday’s interview, Irwin said Twitter removed about 44,000 accounts implicated in child safety breaches, working with cybersecurity group Ghost Data.
Twitter is also restricting hashtags and search results commonly associated with abuse, such as those aimed at “teenage” pornography. Past concerns about how those restrictions might affect permitted uses of the terms are gone, she said.
Using “trusted reporters” was “something we’ve discussed in the past at Twitter, but there was some hesitation and frankly just a little bit of lag,” Irwin said.
“I think we now have the ability to move forward with things like this,” she said.
Reporting by Katie Paul and Sheila Dang; edited by Kenneth Li and Anna Driver