
GLAAD Is Training Google AI To Be Less Homophobic

Because technology shouldn’t have prejudices…
 
The thing about artificial intelligence is that, although it runs on algorithms, it’s also designed to learn independently—and unfortunately, the internet is full of prejudiced opinions, including homophobia. That’s why GLAAD is partnering with Google to help them build an AI that doesn’t discriminate against the LGBT community.
 
In 2017, Google revealed new software called the Cloud Natural Language API, built to help businesses test their messages and rate them on a sentiment scale from negative (-1.0) to positive (+1.0). It turned out to react considerably negatively to words and phrases related to homosexuality: for example, the AI rated the phrase “I’m straight” a 0.1 and the phrase “I’m homosexual” a -0.4.
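To make the scale concrete, here is a toy sentiment scorer in the same spirit, not the real Cloud Natural Language API. The lexicon values below are invented for illustration; a real system learns them from large text corpora, which is exactly how biases from the web creep in.

```python
# Toy sentiment scorer illustrating a -1.0 (negative) to +1.0 (positive)
# scale like the one Google's API uses. The word scores are invented.
TOY_LEXICON = {
    "great": 0.8,
    "love": 0.9,
    "terrible": -0.8,
    "hate": -0.9,
}

def toy_sentiment(text: str) -> float:
    """Average the lexicon scores of known words; unknown words count as 0.0."""
    words = text.lower().split()
    if not words:
        return 0.0
    score = sum(TOY_LEXICON.get(w, 0.0) for w in words) / len(words)
    # Clamp to the API-style range.
    return max(-1.0, min(1.0, score))

print(toy_sentiment("i love this"))  # positive
print(toy_sentiment("i hate this"))  # negative
```

The point of the sketch: a phrase's score is just an aggregate of learned word associations, so if training data associates a neutral identity term with hostility, the phrase containing it scores negative.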
 
This isn’t the first time an AI turned out to accidentally be like the person you block on social media after you’re sick of reading their awful opinions. In 2016, Microsoft released (and promptly apologized for) a chatbot named Tay, which was supposed to learn how to hold believable conversations by analyzing and imitating other conversations on the internet. It turned out that letting an AI loose on the internet without a filter wasn’t the best idea: the darker parts of humanity quickly took over, and Tay turned into a homophobic, racist, antisemitic, Holocaust-denying Trump supporter.
 
Alphabet, Google’s parent company, wants to work on ending biases in AI, so it has announced it will be working with GLAAD to make sure future artificial intelligence is sensitive to LGBT users. Because content related to the queer community has a tendency to generate hateful comments on the internet, algorithms learn to process LGBT phrases negatively, so extra attention has to be paid to making sure that doesn’t happen.
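The mechanism described above can be sketched with a toy co-occurrence model (the comments and labels below are entirely invented for illustration): a word that mostly appears in comments labeled negative inherits a negative score, even when the word itself is neutral.

```python
from collections import defaultdict

# Invented training data: (comment, human sentiment label in [-1, 1]).
# The identity term "gay" is neutral, but because it appears mostly in
# hostile comments here, a naive model learns a negative association.
training_comments = [
    ("gay people are awful", -1.0),        # abusive comment
    ("i can't stand gay marriage", -1.0),  # abusive comment
    ("gay rights are human rights", 1.0),  # supportive comment
    ("what a sunny day", 1.0),
]

def learn_word_scores(comments):
    """Score each word as the mean label of the comments containing it."""
    totals, counts = defaultdict(float), defaultdict(int)
    for text, label in comments:
        for word in set(text.lower().split()):
            totals[word] += label
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

scores = learn_word_scores(training_comments)
print(scores["gay"])    # negative, despite the word being neutral
print(scores["sunny"])  # positive
```

This is why the fix can’t just be more data from the open web: if most of the data mentioning a community is abuse directed at it, the learned association gets worse, not better.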
 
Working with Jigsaw, a division of Google that builds tools for dealing with abusive comments, GLAAD plans to train future AI to recognize the difference between slurs against LGBT people and legitimate terms. The plan was announced at SXSW, where Jigsaw product manager CJ Adams explained that their “mission is to help communities have great conversations at scale. We can’t be content to let computers adopt negative biases from the abuse and harassment targeted groups face online.”
 
Obviously the solution can’t be to suppress all LGBT-related content; instead, Jigsaw aims to help AI determine the right kind of language and tone to use without absorbing the negative opinions that still litter the web.
 
“AI has the potential for amazing benefits, but also has the potential to widen social divisions and further harm marginalized communities like LGBTQ people,” Jim Halloran, chief digital officer at GLAAD, told an audience at SXSW. “That is why it is crucial that we are collaborating with important organizations like Google to build inclusive AI that accelerates acceptance for all people.”
 
