Women are harassed every 30 seconds on Twitter, major study finds


If you're a woman with an opinion on Twitter, chances are you've been harassed, abused, or even threatened. Now, a major new study by Amnesty International has documented the horrifying prevalence of this abuse.

The study — conducted with global artificial intelligence software company Element AI — analysed 228,000 tweets sent to 778 women politicians and journalists in the UK and U.S. in 2017.


The findings revealed that 1.1 million "abusive or problematic tweets" were sent to the women in the study across the year — one every 30 seconds on average.

Women of colour were found to be more likely to be targeted — black women were 84 percent more likely than white women to be mentioned in "abusive or problematic" tweets.

The research revealed that abuse is experienced by women "across the political spectrum" in both the U.S. and UK.

The research has resulted in the creation of the "world's largest crowdsourced dataset" about online abuse against women, per Amnesty International’s senior advisor for tactical research, Milena Marin.


Now that this dataset is in place, there's data and research to "back up what women have long been telling us," said Marin in a statement. "That Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked," she added.

“We found that, although abuse is targeted at women across the political spectrum, women of colour were much more likely to be impacted, and black women are disproportionately targeted. Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalised voices," said Marin.

Vijaya Gadde, Twitter's legal, policy, trust and safety global lead, told Mashable in a statement that Twitter has "publicly committed to improving the collective health, openness, and civility of public conversation on our service."

Gadde said that Twitter uses "a combination of machine learning and human review to adjudicate abuse reports and whether they violate our rules."

The statement continued: "Context matters when evaluating abusive behavior and determining appropriate enforcement actions. Factors we may take into consideration include, but are not limited to, whether: the behavior is targeted at an individual or group of people; the report has been filed by the target of the abuse or a bystander; and the behavior is newsworthy and in the legitimate public interest. Twitter subsequently provides follow-up notifications to the individual that reports the abuse. We also provide recommendations for additional actions that the individual can take to improve his or her Twitter experience, for example using the block or mute feature."

According to an Amnesty statement, Twitter received the findings of the report and "requested clarification on the definition of 'problematic,'" citing the "need to protect free expression and ensure policies are clearly and narrowly drafted."

Gadde also commented on the use of the word "problematic." "I would note that the concept of 'problematic' content for the purposes of classifying content is one that warrants further discussion," Gadde said in Twitter's official response to the report, which was emailed to Mashable.

"It is unclear how you have defined or categorised such content, or if you are suggesting it should be removed from Twitter," Gadde continued. "We work hard to build globally enforceable rules and have begun consulting the public as part of the process, a new approach within the industry."
