Image Credit: Reuters
Everyone’s going big on artificial intelligence (AI) today. From Google to Microsoft to Apple and possibly Samsung, AI has become the buzzword in the smartphone space as well. But as these systems grow more powerful and take on more tasks, there is a need to ensure they are not used by authoritarian regimes to target certain populations.
At the ongoing South by Southwest (SXSW) event in Texas, USA, Microsoft Research’s Kate Crawford explained her point of view on the AI scene and how it can serve as an ideal cover for teaching these enormous data systems the wrong ideas without any accountability.
The Guardian reported the researcher’s take on artificial intelligence, warning of the rise of ultra-nationalism, right-wing authoritarianism and fascism aided by the widespread use of these systems.
However, it is not the use of these systems as such, but the way in which human biases are encoded into them, that leads to their misuse when they fall into the wrong hands.
The researcher, who founded AI Now, a research community focused on the social impacts of AI, gave a simple example: researchers from Shanghai Jiao Tong University in China claimed to have developed a system that could predict criminality from a person’s facial features. How did they do it? The machine was fed Chinese government ID photos of criminals and non-criminals so it could analyse their facial patterns and predict criminality. Despite the method used, the researchers claimed that the system was free from bias.
“We should always be suspicious when machine learning systems are described as ‘free from bias’ if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data,” she explained in her SXSW session, titled Dark Days: AI and the Rise of Fascism.
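Crawford’s point can be illustrated with a toy sketch. This is not the Shanghai Jiao Tong system (whose details are not public here); it is a deliberately simple, hypothetical classifier trained on made-up data in which the “criminal” label correlates with a spurious feature — the type of photo — rather than anything about the person. The feature names and data are invented for illustration only.

```python
from collections import Counter

# Hypothetical training data: each "photo" is a dict of features plus a
# human-assigned label. By construction, the label correlates almost
# perfectly with a spurious feature (photo_type), mimicking a dataset
# where criminals' photos come from one source and non-criminals' from
# another.
train = [
    ({"photo_type": "mugshot", "smiling": False}, "criminal"),
    ({"photo_type": "mugshot", "smiling": False}, "criminal"),
    ({"photo_type": "mugshot", "smiling": True},  "criminal"),
    ({"photo_type": "id_card", "smiling": True},  "non-criminal"),
    ({"photo_type": "id_card", "smiling": True},  "non-criminal"),
    ({"photo_type": "id_card", "smiling": False}, "non-criminal"),
]

def train_naive(data):
    """Count, per feature value, how often each label occurs -- a crude
    stand-in for what any statistical learner does with a strong
    spurious correlation in its training data."""
    counts = {}
    for features, label in data:
        for key, value in features.items():
            counts.setdefault((key, value), Counter())[label] += 1
    return counts

def predict(counts, features):
    """Vote with the label counts of each feature value seen in training."""
    votes = Counter()
    for key, value in features.items():
        if (key, value) in counts:
            votes += counts[(key, value)]
    return votes.most_common(1)[0][0]

model = train_naive(train)

# The model has not learned "criminality"; it has learned which kind of
# photo each label came from -- the bias built into the training data.
print(predict(model, {"photo_type": "mugshot", "smiling": True}))   # criminal
print(predict(model, {"photo_type": "id_card", "smiling": False}))  # non-criminal
```

A system like this can score perfectly on held-out data drawn the same way and so be described as “free from bias”, while in fact it has only memorised the bias in how the data was collected — which is exactly Crawford’s warning.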
Crawford also noted how social media websites have turned into open registries. She pointed out that research from the University of Cambridge has shown it is possible to predict people’s religion based on what they like on Facebook.
Researchers were able to correctly classify 82 percent of cases when distinguishing Christians from Muslims, and achieved similar accuracy when sifting through profiles for Democrats and Republicans. The study is not new — it was conducted back in 2013 — and AI has taken giant leaps since then.