Facebook has been playing whack-a-mole with a number of problems of late. The social networking site has been accused of not doing enough to curb the menace of fake news articles, of having an organisational gender bias, of allowing fake accounts to remain active, and of not taking down graphic videos of rape or murder quickly enough. As far back as 2012, Facebook's content moderation policies have appeared to reflect the sensibilities of a teenage male.
More recently, news reports have covered videos of a fight related to the murder of a young woman in Canada, a live video of the killing of a child in Thailand, a live video of the rape of a woman in Sweden, the accidental suicide of a thirteen-year-old on Instagram, the sexual assault of a teenage girl by multiple men in the United States, and a live video of a random killing by a deranged man.
There were calls for Facebook to take stronger measures to prevent such videos from spreading on the social network, and to take them down more quickly. At the F8 developer conference, Mark Zuckerberg indicated that more would be done to prevent a repeat of tragedies such as the killing of a random man on Facebook Live. “We have a lot of work, and we will keep doing all we can to prevent tragedies like this from happening,” Zuckerberg said. Facebook has now announced that it will hire 3,000 people over the course of the next year, in addition to its 4,500 existing moderators, to restrict the reach of live videos with graphic content.
Facebook has indicated that it will use artificial intelligence to identify graphic videos on live streams. With the increasing proliferation of cameras and Facebook's strong focus on video content, the volume of graphic content posted in videos, live videos, 360-degree videos and 360-degree live videos is set to increase exponentially. According to MIT Technology Review, 7,500 moderators may not be enough to tackle the situation. Even so, according to a report in Vox, the figure is a disproportionate amount of manpower dedicated to curbing graphic videos, compared to the number of human moderators assigned to other problems, such as fake news.
Facebook has also been using artificial intelligence to shut down fake accounts, by identifying the common patterns these accounts display. Facebook has indicated that a majority of the problematic posts its moderators respond to are first flagged by machine learning algorithms. The company has made a concerted effort to curb the circulation of “revenge porn” on the platform. However, police in various countries have claimed that Facebook has allowed objectionable content, including child pornography, to circulate even after the site's moderators were directly informed of it.
In April, Facebook shareholders demanded a report on the steps the company is taking to reduce the number of fake news articles circulating on the website, as well as a report addressing concerns about gender pay equality. Facebook's board of directors recommended a vote against both proposals. Recently, Facebook denied that women engineers at the company were paid less than their male counterparts. On the question of fake news, Facebook COO Sheryl Sandberg pointed out that the platform was not an arbiter of truth.
Facebook has shut down 30,000 fake accounts in France that were circulating news articles in the run-up to the presidential elections in that country. It has also provided $14 million in funding to the News Integrity Initiative, whose participants include academics, non-profit organisations, the Mozilla Foundation and Wikipedia founder Jimmy Wales. The initiative is a concentrated effort to rebuild the credibility of the website in the wake of strong criticism for allowing the spread of misinformation.
In the wake of the surprise election of US presidential candidate Donald Trump, Facebook was accused of facilitating the win by allowing fake news articles to circulate on the website. Facebook denied the allegations. The company partnered with independent fact checkers, and began trialling a tool in some locations that showed a pop-up alert on suspicious stories reading “Disputed by multiple, independent fact-checkers,” while identifying the third-party sites that disputed the stories.
However, the platform can be used in far more insidious ways to shape public opinion. A little-known company called Cambridge Analytica was part of the Trump campaign, and also played a role in the Brexit referendum. The company harvested Facebook likes to construct psychographic profiles of people before delivering targeted communication to them. The company claims to have provided just the right message to the right people at just the right time, helping turn the tide of the elections. Facebook has indicated that it is taking steps to curb “information operations” on the platform that are more subtle than fake news.
In an interview with Fast Company, Zuckerberg pointed out that his aim was to build a more connected world, and that it was important to take the initiative in creating the technologies first. New technologies are bound to have an ugly side, and it then becomes important to deal with the consequences. In his view, however, the benefits of the new technologies outweigh the negatives.