AI ROBOTS LEARNING RACISM, SEXISM AND OTHER PREJUDICES FROM HUMANS, STUDY FINDS

‘These technologies may perpetuate cultural stereotypes’

IAN JOHNSTON SCIENCE CORRESPONDENT

THE INDEPENDENT TECH
Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online examined how closely associated different terms were in the text – the same approach that automatic translators use, via “machine learning”, to establish what words mean.

Some of the results were stunning.

The researchers found male names were more closely associated with career-related terms than female ones, which were more closely associated with words related to the family.

This link was stronger than the non-controversial findings that musical instruments and flowers were pleasant and weapons and insects were unpleasant.

Female names were also strongly associated with artistic terms, while male names were found to be closer to maths and science ones.

There were strong associations – measured using word “embeddings”, numerical representations of how words are used – between European or American names and pleasant terms, and between African-American names and unpleasant terms.
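The kind of association measured in the study can be sketched in a few lines: words are represented as vectors, and closeness is scored by cosine similarity. This is a minimal illustration with made-up three-dimensional toy vectors – the real study used pretrained embeddings with hundreds of dimensions, and every word and number below is hypothetical:

```python
import math

# Toy word vectors -- invented for illustration only.
vectors = {
    "engineer": [0.9, 0.1, 0.2],
    "family":   [0.1, 0.9, 0.3],
    "john":     [0.8, 0.2, 0.1],
    "mary":     [0.2, 0.8, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def association(word, attribute):
    return cosine(vectors[word], vectors[attribute])

# With these toy vectors, "john" sits closer to "engineer" and "mary"
# closer to "family" -- the kind of asymmetry the researchers quantified.
print(association("john", "engineer") - association("john", "family"))
print(association("mary", "engineer") - association("mary", "family"))
```

A positive difference for one name and a negative one for the other is exactly the sort of directional bias the study reported across many name and attribute lists.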

The effects of such biases on AI can be profound.

For example, Google Translate, which learns what words mean from the way people use them, translates the Turkish sentence “O bir doktor” into English as “he is a doctor”, even though the Turkish pronoun “o” is gender-neutral – the sentence can mean either “he is a doctor” or “she is a doctor”.

But change “doktor” to “hemşire”, meaning nurse, in the same sentence and it is translated as “she is a nurse”.
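The Translate behaviour described above is consistent with a system that resolves the ambiguous pronoun by picking whichever gendered form appears most often with that occupation in its training data. A deliberately simplified sketch – the counts are invented and Google’s actual system is far more complex:

```python
# Invented corpus statistics: how often each English pronoun appeared
# alongside each occupation in hypothetical training text.
pronoun_counts = {
    "doctor": {"he": 820, "she": 310},
    "nurse":  {"he": 90,  "she": 760},
}

def translate_o_bir(occupation):
    """Resolve the gender-neutral Turkish 'o' by choosing the pronoun
    most frequently seen with this occupation -- reproducing the bias."""
    counts = pronoun_counts[occupation]
    pronoun = max(counts, key=counts.get)
    return f"{pronoun} is a {occupation}"

print(translate_o_bir("doctor"))  # he is a doctor
print(translate_o_bir("nurse"))   # she is a nurse
```

The point of the sketch is that no one programmed a gender rule: the skew falls straight out of frequency statistics in the data.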

Last year, a Microsoft chatbot called Tay was given its own Twitter account and allowed to interact with the public.

It turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories in just 24 hours. “[George W] Bush did 9/11 and Hitler would have done a better job than the monkey we have now,” it wrote. “Donald Trump is the only hope we’ve got.”

In a paper about the new study in the journal Science, the researchers wrote: “Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes.

“Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable.

“Already, popular online translation systems incorporate some of the biases we study. Further concerns may arise as AI is given agency in our society.

“If machine-learning technologies used for, say, résumé screening were to imbibe cultural stereotypes, it may result in prejudiced outcomes.”

The researchers said the AI was not to blame for such “problematic” effects.

“Notice that the word embeddings ‘know’ these properties of flowers, insects, musical instruments, and weapons with no direct experience of the world and no representation of semantics other than the implicit metrics of words’ co-occurrence statistics with other nearby words.”
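The “co-occurrence statistics” the authors refer to can be shown directly: the raw signal behind embeddings is simply counts of which words appear near which others. A toy sketch, with an invented three-sentence corpus standing in for the billions of words real systems train on:

```python
from collections import Counter
from itertools import combinations

# Tiny invented corpus -- illustration only.
sentences = [
    "the nurse helped her patient",
    "the doctor saw his patient",
    "the nurse checked her chart",
]

# Count how often each pair of words appears in the same sentence.
cooccur = Counter()
for s in sentences:
    words = sorted(set(s.split()))
    for a, b in combinations(words, 2):
        cooccur[(a, b)] += 1

print(cooccur[("her", "nurse")])   # 2 -- "her" appears with "nurse" twice
print(cooccur[("doctor", "his")])  # 1
print(cooccur[("doctor", "her")])  # 0 -- never together in this toy corpus
```

Statistical patterns like these are all an embedding model ever sees, which is the researchers’ point: the “knowledge” (and the bias) is implicit in the counts.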

But changing the way AI learns would risk missing out on unobjectionable meanings and associations of words.

“We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations,” the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

“Our work suggests that behaviour can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages,” the paper said.

“Before providing an explicit or institutional explanation for why individuals make prejudiced decisions, one must show that it was not a simple outcome of unthinking reproduction of statistical regularities absorbed with language.

“Similarly, before positing complex models for how stereotyped attitudes perpetuate from one generation to the next or from one group to another, we must check whether simply learning language is sufficient to explain (some of) the observed transmission of prejudice.”

One of the researchers, Professor Joanna Bryson of the University of Bath, told The Independent that instead of changing the way AI learns, the way it expresses itself should be altered.

So the AI would still “hear” racism and sexism, but would have a moral code that would prevent it from expressing these same sentiments.

Such filters can be controversial. The European Union has passed laws to ensure the terms of AI filters are made public.

For Professor Bryson, the key finding of the research was not so much about AI but humans.

“I think the most important thing here is we have understood more about how we are transmitting information, where words come from and one of the ways in which implicit biases are affecting us all,” she said.

Source: http://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-robots-artificial-intelligence-racism-sexism-prejudice-bias-language-learn-from-humans-a7683161.html?cmpid=facebook-post
