Tech Tent: Can social media stop abuse?

As pressure to confront all kinds of abuse has grown, the platforms have employed thousands of moderators and, when they were overwhelmed by the volume of reports, turned to automation. Facebook, for instance, takes pride in the fact that most terrorism-related posts are wiped by AI, boasting in 2017 that it detected 99% of posts by al-Qaeda and the Islamic State group before being alerted by users.

But this week that tech showed its limitations. When the BBC’s Cristina Criddle reported a comment featuring an orangutan emoji on Bukayo Saka’s Instagram page, she got this reply: “Our technology has found that this comment doesn’t go against our community guidelines.”

The message went on to concede “our technology isn’t perfect” and, a couple of days later, Instagram boss Adam Mosseri admitted mistakes had been made. Replying to a tweet from Cristina, he said: “We have technology to try and prioritise reports, and we were mistakenly marking some of these as benign comments, which they are absolutely not. The issue has since been addressed.”
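Mosseri’s description suggests a triage pipeline in which a model scores each report and low-scoring ones never reach a human reviewer. The Python sketch below is purely illustrative – the class, threshold and scores are assumptions, not Instagram’s actual system – but it shows how a mis-calibrated score can turn a valid report into an automated “doesn’t go against our community guidelines” reply.

```python
from dataclasses import dataclass

# Illustrative sketch only: the names, threshold and scores below are
# assumptions, not Instagram's real moderation pipeline.

@dataclass
class Report:
    comment: str
    abuse_score: float  # model-estimated probability the comment is abusive

BENIGN_THRESHOLD = 0.5  # hypothetical cut-off below which a report is dismissed

def triage(reports: list[Report]) -> list[Report]:
    """Pass only reports the model scores as likely abusive to human review.

    Anything below the threshold is marked benign and the reporter receives
    an automated "doesn't go against our community guidelines" reply.
    """
    return [r for r in reports if r.abuse_score >= BENIGN_THRESHOLD]

# If the model under-scores emoji-only abuse (say 0.1 for an orangutan emoji),
# the report is silently dropped here - the failure Mosseri acknowledged.
reports = [Report("\U0001F9A7", 0.1), Report("you are scum", 0.9)]
print([r.comment for r in triage(reports)])  # only the second report survives
```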

But technology writer Charles Arthur tells Tech Tent he is not convinced that Instagram’s owner Facebook is trying hard enough: “The technology is far from perfect because the humans who are sorting it out are far from perfect.”

He says teaching the system that sending orangutan emojis to a black footballer is abusive should not be hard: “It would be the work of an hour at most, perhaps, for any competent programmer, to make this sort of change and to flag that sort of thing to make sure that other offensive emoji couldn’t be sent.”
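As a rough illustration of the kind of change Arthur has in mind, here is a minimal Python sketch of a denylist check. The emoji set and function name are hypothetical, and a real system would also need context, such as who the comment targets, since none of these emoji are abusive in isolation.

```python
# Minimal sketch of an emoji denylist check; the set below is an
# illustrative assumption, not a real moderation list.
ABUSIVE_EMOJI = {
    "\U0001F9A7",  # orangutan
    "\U0001F98D",  # gorilla
    "\U0001F412",  # monkey
    "\U0001F34C",  # banana
}

def should_flag(comment: str) -> bool:
    """Flag a comment for human review if it contains any denylisted emoji."""
    return any(ch in ABUSIVE_EMOJI for ch in comment)

print(should_flag("\U0001F9A7"))   # True - the comment the BBC reported
print(should_flag("Great goal!"))  # False
```

Flagging for review rather than blocking outright sidesteps the context problem: a monkey emoji on a zoo’s page is innocuous, while the same emoji aimed at a black footballer is racist abuse.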

Emoji issue

Arthur, former technology editor of the Guardian, has written a book called Social Warming, which describes the pernicious impact of social media on everything from journalism to politics and compares it to climate change.

His criticism is targeted in particular at Facebook, which he insists has a long history of failing to foresee and prepare for predictable ways that people might abuse its platform: “I think, always with Facebook, that it expects a bit too much from people. There always seems to be this hope that, just this one time, people are actually going to be much more pleasant, and it tends to be disappointed every time.”

Politicians are promising tighter regulation of the tech giants, with the long-awaited Online Safety Bill threatening huge fines if they don’t act swiftly enough to remove harmful material from their platforms. Defining what is harmful but not illegal content will, however, be another challenge for Ofcom as it takes on the new job of policing the internet as well as the telecoms and media industries.

Poisoned well

In the meantime, some are coming up with swifter solutions. Bill Mitchell of the BCS, the Chartered Institute for IT, tells Tech Tent about one idea – making social media companies verify the identity of their users.

“The underlying idea is simply that if you are accountable for your behaviour, if you’re accountable for what you say, if we can find out who you are, if you do something which is totally abhorrent and unacceptable, then that is going to really severely restrict your desire to do those things.”

But even some BCS members are sceptical about this idea, pointing out that it could discriminate against people without a driving licence or other form of ID and that the data collected by the social media companies would prove a honeypot for hackers. In any case, some of those involved in the racial abuse of footballers were not shy about using their real names.

When social media first came on the scene it seemed to promise a new era of democratic communication, with politicians and voters, celebrities and their fans able to connect on equal terms.

But that dream has died, the well has been poisoned, and there is no agreement on just how to make things better.
