
Google introduces new system to flag offensive and upsetting content: Here's all you need to know about it


Image: Reuters

By Nimish Sawant / 17 Mar 2017, 12:01

Google is upping the ante on keeping its search results free of offensive material, and is letting its review teams flag Google Search results that may be deemed offensive or upsetting.

Google is introducing a new ‘upsetting-offensive’ category, which lets its raters tag offensive content such as racial slurs, as well as content promoting hate or violence against a specific group of people based on gender, race or other sensitive criteria.

To flag this data, Google will be using human ‘quality raters’ rather than algorithms. In fact, the idea is to use human judgement to train the algorithms on which search results could be deemed offensive or upsetting. For instance, if you search for ‘Holocaust history’ on Google and the results include a link to an article titled ’10 reasons why the Holocaust didn’t happen’, that web page can be flagged under the ‘upsetting-offensive’ category, which will downrank the link. The content will still be visible in search results, but you will have to dig through multiple pages to come across it.

The whole idea behind flagging content this way is to show relevant search results front and centre, and keep offensive or inaccurate sites at bay. According to Paul Haahr, a senior Google engineer working on search quality, Google is explicitly avoiding the term ‘fake news’ for this, as the company thinks that is too vague a term.

Who is flagging the content?

If Google opened the option of flagging search results to everyone, there is a high chance it would be misused. For instance, if I or a group of friends did not like a certain site whose views oppose ours, we could go on a flagging spree. There has to be a responsible, objective entity doing the flagging of search results.


To that effect, Google has appointed around 10,000 ‘quality raters’, human contractors that Google uses globally to evaluate its search results. These raters have to conduct actual searches and observe the results, and then rate the results on the first page based on the quality of answers they seem to provide.

How does it work?

Don’t worry, this is not Google censoring search results. Every time a quality rater flags a search result as ‘upsetting-offensive’, it does not cause an immediate fall in the page’s ranking. Raters cannot alter Google search results, if that is what you are worried about. The data collected from all these quality raters is used by Google to improve the quality of its search results as well as its search algorithms.

Also, the quality raters have to follow a set of guidelines when rating pages, laid out in the complete 200-page Search Quality Evaluator Guidelines PDF.

On the upsetting-offensive flag, this is all the guidelines include:

  • Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
  • Content with racial slurs or extremely offensive terminology.
  • Graphic violence, including animal cruelty or child abuse.
  • Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
  • Other types of content which users in your locale would find extremely upsetting or offensive.

According to Google, quality raters are expected to mark links as offensive or upsetting only if they really are so, based on the actual content as well as the locale the raters are in. In fact, Google says that such content should only show up in results if users are explicitly searching for it.

Image: Google


What happens to the flagged content?

Like we mentioned above, once content is flagged as ‘upsetting-offensive’, it does not mean that the webpage hosting it will be banned by Google or face an immediate demotion in the search results. The quality raters’ flags are used as training material for Google’s machine learning systems, as well as for the human engineers who work on the Google search algorithm and fine-tune the search results. The main takeaway for Google is to be able to identify such offensive content on its own, before it becomes a huge issue and is brought to its notice by offended users. Google uses this data so that its search algorithms themselves can locate pages that are likely to contain offensive content.
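
To make that training loop concrete, here is a minimal, purely illustrative sketch in Python of how rater flags could serve as labels for a text classifier whose score then becomes one demotion signal at ranking time. The tiny dataset, the model choice and the weighting are all assumptions made for illustration; Google has not published how its systems actually use this data.

```python
# Illustrative sketch only: rater flags treated as binary training labels for a
# text classifier, whose score is then used as one demotion signal at ranking time.
# The data, model and weights here are hypothetical; Google's systems are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical rater data: page text plus the rater's upsetting-offensive flag (1 = flagged).
rated_pages = [
    ("10 reasons why the Holocaust didn't happen", 1),
    ("Holocaust history: timeline and primary sources", 0),
    ("Hate-filled screed against a minority group", 1),
    ("Encyclopedia article on the history of racism", 0),
]
texts = [text for text, _ in rated_pages]
labels = [flag for _, flag in rated_pages]

# Turn page text into features and fit a simple classifier on the rater labels.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(features, labels)

def offensiveness_score(page_text: str) -> float:
    """Predicted probability that raters would flag this page as upsetting-offensive."""
    return model.predict_proba(vectorizer.transform([page_text]))[0][1]

def adjusted_rank_score(relevance: float, page_text: str) -> float:
    """Demote (never remove) pages the model predicts raters would flag."""
    demotion = 1.0 - 0.5 * offensiveness_score(page_text)  # 0.5 is an arbitrary weight
    return relevance * demotion
```

The point of the sketch is only the flow of data: human labels in, a learned signal out, applied as a gentle demotion rather than removal.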

Also, do not be under the impression that such content will not appear in Google search results at all. If someone is specifically looking for something on Google, results that inform the user about that search term will still be visible. For instance, if you are explicitly seeking offensive or upsetting content, say ‘white supremacist sites’, you will still get the relevant search results. Or if you search for ‘Holocaust denial’, you will get results which discuss it.

It is a tricky path, in the sense that if Google starts blocking searches even for someone who, for whatever reason, is searching for an offensive term, then Google will be no different from the many censorship regimes we are all familiar with.

Image: AP


Let’s say, for instance, that you are doing research on one of these controversial topics, such as Holocaust denial. You may not ask such questions face to face or on a more public platform, but if you Google them to understand why people deny the Holocaust, you will end up visiting some of these seemingly offensive sites to get an idea of the other side. Hence Google isn’t outright blocking these sites if you are specifically looking for such content.

“People may also want to understand why certain racially offensive statements are made. Giving users access to resources that help them understand racism, hatred, and other sensitive topics is beneficial to society. When the user’s query seems to either ask for or tolerate potentially upsetting, offensive, or sensitive content, we will call the query an ‘Upsetting-Offensive tolerant query’. For the purpose of Needs Met rating, please assume that users have a dominant educational/informational intent for Upsetting-Offensive tolerant queries. All results should be rated on the Needs Met rating scale assuming a genuine educational/informational intent,” says Google in its document.
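
As a rough sketch of how that tolerant-query distinction could play out in ranking, the hypothetical Python snippet below skips the demotion step whenever the query itself is judged to seek such content. The query list, the demotion formula and the function names are invented for illustration and do not reflect Google’s actual implementation.

```python
def demote(relevance: float, offensiveness: float) -> float:
    """Illustrative demotion: scale a page's relevance down by its predicted offensiveness."""
    return relevance * (1.0 - 0.5 * offensiveness)  # 0.5 is an arbitrary weight

def is_tolerant_query(query: str) -> bool:
    """Hypothetical stand-in for a query-intent classifier."""
    return query.lower() in {"holocaust denial", "white supremacist sites"}

def final_score(query: str, relevance: float, predicted_offensiveness: float) -> float:
    # Tolerant queries are assumed to carry a genuine educational/informational
    # intent, so otherwise-relevant results are not demoted for them.
    if is_tolerant_query(query):
        return relevance
    return demote(relevance, predicted_offensiveness)
```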

Google’s evaluator guidelines include worked examples that clarify the point discussed above.


Is this going to help curb the toxicity that is flying around?

Speaking to Search Engine Land, Haahr said that this is still a learning process and Google is monitoring how this will play out.

“We’ve been very pleased with what raters give us in general. We’ve only been able to improve ranking as much as we have over the years because we have this really strong rater program that gives us real feedback on what we’re doing,” said Haahr.

It is too early to say for sure how this will help in the long run, especially given what we have been seeing of late, with fake news and inaccurate information floating around and becoming the narrative. It definitely seems like a well thought out step from Google, one that not only helps it improve its algorithms and make them more sensitive, but also ensures that Google isn’t censoring search results just because some people may find them offensive.

It is a balancing act that Google will have to pull off. But it is heartening to know that the quality raters appointed by Google are not machines but humans, whose ratings will in turn be used by the humans working on the search algorithm.


