AI ROBOTS LEARNING RACISM, SEXISM AND OTHER PREJUDICES FROM HUMANS, STUDY FINDS

‘These technologies may perpetuate cultural stereotypes’

IAN JOHNSTON SCIENCE CORRESPONDENT

THE INDEPENDENT TECH

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words published online looked at how closely different terms were associated with each other in text – the same “machine learning” technique that automatic translators use to establish what words mean.

Some of the results were stunning.

The researchers found male names were more closely associated with career-related terms than female ones, which were more closely associated with words related to the family.

This link was stronger than the non-controversial findings that musical instruments and flowers were pleasant, and weapons and insects were unpleasant.

Female names were also strongly associated with artistic terms, while male names were found to be closer to maths and science ones.

There were strong associations – measured using word “embeddings”, the mathematical representations of words that these systems learn – between European-American names and pleasant terms, and between African-American names and unpleasant terms.
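(As an illustration of how such associations are measured: the core calculation can be sketched in a few lines of Python. The vectors below are random stand-ins invented for this example, and the function names are our own; the study itself used word embeddings trained on a large corpus of web text.)

import numpy as np

def cosine(u, v):
    # Cosine similarity: higher means the two word vectors point in
    # more similar directions, i.e. the words are more closely associated.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, attrs_a, attrs_b):
    # Mean closeness to attribute set A minus mean closeness to set B:
    # positive scores lean towards A (say, career terms), negative
    # scores towards B (say, family terms).
    return (np.mean([cosine(word_vec, a) for a in attrs_a]) -
            np.mean([cosine(word_vec, b) for b in attrs_b]))

# Toy stand-in vectors, purely for illustration.
rng = np.random.default_rng(seed=1)
name_vec = rng.normal(size=50)
career_vecs = [rng.normal(size=50) for _ in range(8)]
family_vecs = [rng.normal(size=50) for _ in range(8)]
print(association(name_vec, career_vecs, family_vecs))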

The effects of such biases on AI can be profound.

For example, Google Translate, which learns what words mean from the way people use them, translates the Turkish sentence “O bir doktor” into English as “he is a doctor”, even though the Turkish pronoun is not gender-specific: the sentence can mean either “he is a doctor” or “she is a doctor”.

But change “doktor” to “hemşire”, meaning nurse, in the same sentence and it is translated as “she is a nurse”.

Last year, a Microsoft chatbot called Tay was given its own Twitter account and allowed to interact with the public.

It turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories in just 24 hours. “[George W] Bush did 9/11 and Hitler would have done a better job than the monkey we have now,” it wrote. “Donald Trump is the only hope we’ve got.”

In a paper about the new study in the journal Science, the researchers wrote: “Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes.

“Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable.

“Already, popular online translation systems incorporate some of the biases we study. Further concerns may arise as AI is given agency in our society.

“If machine-learning technologies used for, say, résumé screening were to imbibe cultural stereotypes, it may result in prejudiced outcomes.”

The researchers said the AI was not to blame for such “problematic” effects.

“Notice that the word embeddings ‘know’ these properties of flowers, insects, musical instruments, and weapons with no direct experience of the world and no representation of semantics other than the implicit metrics of words’ co-occurrence statistics with other nearby words.”
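(The “co-occurrence statistics” the researchers refer to can be illustrated with a toy counter in Python; this is a simplified sketch of the raw signal such systems learn from, not the actual pipeline used in the study.)

from collections import Counter

def cooccurrence_counts(tokens, window=2):
    # Count how often each pair of words appears within `window`
    # positions of each other - the only "experience" an embedding
    # model has of what words mean.
    counts = Counter()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                counts[(word, tokens[j])] += 1
    return counts

print(cooccurrence_counts("she is a nurse and he is a doctor".split()))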

But changing the way AI learns would risk losing unobjectionable meanings and associations of words along with the biased ones.

“We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations,” the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

“Our work suggests that behaviour can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages,” the paper said.

“Before providing an explicit or institutional explanation for why individuals make prejudiced decisions, one must show that it was not a simple outcome of unthinking reproduction of statistical regularities absorbed with language.

“Similarly, before positing complex models for how stereotyped attitudes perpetuate from one generation to the next or from one group to another, we must check whether simply learning language is sufficient to explain (some of) the observed transmission of prejudice.”

One of the researchers, Professor Joanna Bryson, of Bath University, told The Independent that instead of changing the way AI learns, the way it expresses itself should be altered.

So the AI would still “hear” racism and sexism, but would have a moral code that would prevent it from expressing these same sentiments.
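(A minimal sketch of what such an expression filter might look like, in Python; the generate function and the blocked-terms list are hypothetical placeholders, not anything the researchers have published.)

BLOCKED_TERMS = {"example_slur_1", "example_slur_2"}  # hypothetical stand-ins

def moderated_reply(generate, prompt):
    # The model may still have absorbed biased associations internally,
    # but its output is checked before it is ever expressed.
    reply = generate(prompt)  # any text-generating function
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return "[response withheld by content filter]"
    return reply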

Such filters can be controversial. The European Union has passed laws to ensure the terms of AI filters are made public.

For Professor Bryson, the key finding of the research was not so much about AI but humans.

“I think the most important thing here is we have understood more about how we are transmitting information, where words come from and one of the ways in which implicit biases are affecting us all,” she said.

Source: http://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-robots-artificial-intelligence-racism-sexism-prejudice-bias-language-learn-from-humans-a7683161.html?cmpid=facebook-post
