Overabundance of social media posts, political divide challenge content moderators

TORONTO — Leigh Adams has seen a steady increase in material to review since she began moderating user comments on websites about 14 years ago, but she says the volume has exploded in recent years, and the content has become so divisive that there is only one word for it: “Bonkers.”

Misinformation, trolling and worse have always existed online, but Adams says she saw a shift after the US elected President Donald Trump in 2016 that hit a new high when George Floyd, a black man in Minneapolis, was killed in police custody in May 2020, stoking racial tensions just as the world was locked down due to the COVID-19 pandemic.

“It really was the perfect storm… At the time, the internet was already struggling with ‘How do you reconcile anonymity and accountability? How can we ensure that we amplify the voices of those who might not be heard?’” said Adams, director of moderation services at Viafoura, a Toronto-based company that reviews user content for publishers.

“We hadn’t resolved that, and we still haven’t resolved it, but then you have these (events) on top of that, and it just made it worse.”

Adams noted that Trump’s departure from office and the return of pre-pandemic activities have slightly drowned out the “fiery rhetoric” seen by Viafoura’s more than 800 customers, which include media brands CBC, Postmedia and Sportsnet.

But Viafoura expects volumes to swell again, and other content moderation companies say they have detected no significant signs of the onslaught letting up. Keeping pace will likely involve tackling an ever-changing set of challenges.

Moderators predict that health misinformation will continue to spread rampantly, that bad-faith posters will become even more sophisticated in their attempts to disrupt platforms, and that a slew of new regulations will target online harms in Canada and abroad.

“I don’t see demand going down anytime soon, despite all the talk of the recession,” said Siobhan Hanna, global vice-president of artificial intelligence at Telus International.

“For better or worse, this need for content moderation will continue to grow, but it will take smarter, more efficient, thoughtful, representative and risk-mitigating solutions to manage the increased demand.”

Hanna says video is becoming one of the toughest areas because moderators no longer just have to review clips depicting violence, indecency or other harms that can be difficult to watch.

Now there are also what are called deepfakes – videos where someone’s face or body has been digitally spliced into the frame so that the person appears to be doing or saying things they never did.

The technology hit TikTok when visual effects artist Chris Umé released clips appearing to show actor Tom Cruise playing card tricks, eating a gum-filled lollipop and performing the Dave Matthews Band song “Crash Into Me.”

“I don’t think anyone is going to be hurt by… the videos they create, but it also gets us all used to these deepfakes and maybe distracts our attention from the more sinister applications, where it could affect the course of an election, or it could affect health care outcomes or decisions made about crimes,” Hanna said.

In Northern Ireland, for example, videos purporting to show political candidates Diane Forsythe and Cara Hunter performing sexual acts were released as they ran for office earlier this year.

“I keep being surprised,” Adams said. “You see the worst thing, and then something else happens and you think, ‘What could happen next?’”

Her team recently found a photo that appeared at first glance to be a sunset but, 17 layers back, contained an image of a naked woman.

“If we hadn’t had five people watching this, it would have been live and up there,” she said.

“It’s getting more and more sophisticated, so you have to find new artificial intelligence (AI) tools that are going to keep digging deeper.”

Most companies rely on a mix of human moderators and AI-based systems to review content, but many, like Google, have conceded that machine-based systems “are not always as precise or granular in their content analysis as human reviewers.”

Adams sees the follies of AI when people invent and popularize new terms – “seggs” instead of “sex,” “unalive” instead of “dead,” and “not-see” instead of “Nazi” – to avoid being flagged by moderators, safety filters and parental controls.

“By the time the machines learn it, that news cycle is over and we’ve moved on because they’ve found a new way to say it,” Adams said.

But humans aren’t perfect either, and often can’t keep up with content volumes on their own.

Two Hat, a moderation company in Kelowna, British Columbia, that is owned by Microsoft and used by gaming brands including Nintendo Switch and Rovio, went from processing 30 billion comments and conversations a month before the health crisis to 90 billion in April 2020. Microsoft Canada did not provide more recent figures, with spokeswoman Lisa Gibson saying the company was unable to discuss trends at this time.

Facebook, Instagram, Twitter, YouTube and Google warned users in 2020 that they were taking longer to remove harmful posts at the start of the pandemic as staff retreated to their homes, where viewing sensitive content was more difficult and in some cases prohibited for security reasons.

When asked whether the backlogs had been cleared, Twitter declined to comment, and Facebook and Instagram did not respond. Google temporarily relied more heavily on technology to remove content violating its guidelines at the start of the pandemic, leading to an increase in the total number of video removals, spokesperson Zaitoon Murji said. The company expects video removals to drop as it scales back this technology and more moderators return to the office, she added.

As backlogs built up, countries toughened their stance on harmful content.

The EU recently reached a historic agreement demanding the rapid removal of harmful content online, while Canada has promised to soon introduce an online anti-hate bill, after an earlier version was shelved when a federal election was called.

Adams says the convergence of COVID-19, the rise of Trump and the murder of Floyd has made publishers more willing to take a stand against problematic content such as hate speech and health misinformation. Legislation, which can vary from country to country and is often open to interpretation, could cause companies to have even less tolerance and remove anything that might be considered problematic, she said.

The stakes are high because leaving too much problematic content on a platform can make it dangerous, but removing too much can also interfere with free speech, said Anatoliy Gruzd, professor of information technology management at Toronto Metropolitan University.

“From the user side, it can feel like there isn’t enough effort to make the platforms a welcoming and safe place for everyone, and that’s partly because the platforms are getting so huge, with millions and billions of users at once,” he said.

Gruzd doesn’t see the balance between safety and freedom getting any easier as the policy patchwork evolves, but he thinks society will evolve in how it weighs limits on what it is and isn’t acceptable to be exposed to.

He said: “Some people will vote with their usage. Whether they stop using Facebook or Twitter for certain things, they might decide to go to other platforms with more or less moderation, or they might decide to stop using social media completely.”

This report from The Canadian Press was first published on May 22, 2022.
