
GLAAD Is Training Google AI To Be Less Homophobic

Because technology shouldn’t have prejudices…

The thing about artificial intelligence is that, although it runs on algorithms, it’s also designed to learn independently—and unfortunately, the internet is full of prejudiced opinions, including homophobia. That’s why GLAAD is partnering with Google to help them build an AI that doesn’t discriminate against the LGBT community.

In 2017, Google's Cloud Natural Language API, software built to help businesses test their messaging and rate it on a sentiment scale from negative (-1.0) to positive (1.0), turned out to react considerably more negatively to words and phrases related to homosexuality. For example, the AI rated the phrase “I’m straight” a 0.1 and the phrase “I’m homosexual” a -0.4.
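For readers curious how those numbers are produced, here is a minimal sketch of sending a phrase to the Cloud Natural Language API for sentiment analysis. It uses the current Python client library, whose names may differ from the 2017 release discussed above, and the scores the model returns today have changed since this bias was reported.

```python
# Sketch: scoring a phrase with Google's Cloud Natural Language API.
# Assumes the google-cloud-language package is installed and that
# application-default credentials are configured. Client names follow the
# current Python library, not necessarily the 2017 version discussed above.
from google.cloud import language_v1


def sentiment_score(text: str) -> float:
    """Return a document sentiment score between -1.0 (negative) and 1.0 (positive)."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score


if __name__ == "__main__":
    for phrase in ["I'm straight", "I'm homosexual"]:
        print(f"{phrase!r}: {sentiment_score(phrase):+.1f}")
```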

This isn’t the first time an AI accidentally turned out to be like the person you block on social media after you’re sick of reading their awful opinions. In 2016, Microsoft released (and promptly apologized for) an AI named Tay, a chatbot that was supposed to learn how to hold a believable conversation by analyzing and imitating other conversations on the internet. It turned out that letting an AI loose on the internet without a filter wasn’t the best idea: the dark parts of humanity quickly took over, and Tay turned into a homophobic, racist, antisemitic, Holocaust-denying Trump supporter.

Alphabet, Google’s parent company, wants to work on ending biases in AI, so it has announced it will be working with GLAAD to make sure future artificial intelligence is sensitive to LGBT users. Because content related to the queer community tends to attract hateful comments online, algorithms learn to process LGBT phrases negatively, so extra attention has to be paid to making sure that doesn’t happen.

Working with Jigsaw, a division of Google that creates tools for dealing with abusive comments, GLAAD plans to train future AI to recognize the difference between slurs against LGBT people and legitimate terms. The plan was announced at SXSW, where Jigsaw product manager CJ Adams explained that their “mission is to help communities have great conversations at scale. We can’t be content to let computers adopt negative biases from the abuse and harassment targeted groups face online.”
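Jigsaw’s publicly documented comment-scoring tool, the Perspective API, gives a feel for what “recognizing the difference” means in practice. The sketch below is only an assumption about how one might probe that API for identity-term bias; the API key is a placeholder, and this is not GLAAD’s or Jigsaw’s actual training pipeline.

```python
# Hypothetical sketch: probing Jigsaw's Perspective API for identity-term bias
# by comparing toxicity scores for two otherwise identical sentences.
# YOUR_API_KEY is a placeholder; this illustrates the kind of tool involved,
# not GLAAD's or Jigsaw's actual training pipeline.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")


def toxicity(text: str) -> float:
    """Return the TOXICITY summary score (0.0 to 1.0) for a piece of text."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    for phrase in ["I am a gay man", "I am a tall man"]:
        print(f"{phrase!r}: {toxicity(phrase):.2f}")
```

If an identity term alone pushes the score up, that gap is exactly the kind of bias the GLAAD partnership is meant to train out.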

Obviously, the solution can’t be to suppress all LGBT-related content; instead, Jigsaw’s work should help AI determine the right kind of language and tone to use without absorbing the negative opinions that still litter the web.

“AI has the potential for amazing benefits, but also has the potential to widen social divisions and further harm marginalized communities like LGBTQ people,” Jim Halloran, chief digital officer at GLAAD, told an audience at SXSW. “That is why it is crucial that we are collaborating with important organizations like Google to build inclusive AI that accelerates acceptance for all people.”
