
Photo by Markus Spiske on Unsplash

Is AI Facial Recognition Trans-Inclusive?

Does AI, specifically facial recognition, fail to recognize transgender people, or misgender them? And, if so…what needs to be done to fix it?

By Warren Urquhart

With every headline about AI being a grandiose proclamation about how utopia or dystopia is around the corner, it’s easy to forget how embedded AI is in our lives already.

If I make a typo, the autocorrect function that fixes it is AI. When I log into the laptop I am typing this on, my computer unlocks by recognizing my face through its camera – that’s AI. For those of you with iPhones that have facial recognition – you use AI every time you look down to unlock your phone, which the average person does 58 times a day.

Despite its already large presence, AI (like most parts of our society) is not built for trans and non-binary folks. A 2019 study by Scheuerman, Paul and Brubaker of the University of Colorado Boulder looked at the accuracy of facial analysis (FA) tech, like facial detection and facial recognition, specifically in how it identified the genders of cis, trans and non-binary people. The results showed a large disparity in accuracy at identifying cis people versus identifying trans and non-binary folks.

The study looked at the technology from Amazon, Clarifai, IBM and Microsoft, and then aggregated the results to determine their collective accuracy. Overall accuracy was highest for cisgender women (at 98.3%), followed by cisgender men (97.6%), trans women (87.3%) and trans men (70.5%). Agender, genderqueer and non-binary groups had a 0% accuracy rate: at the time of the study, the relevant FA tech only had binary gender labels, and since those groups fall outside the gender binary, it was impossible for any of the services “to return a correct classification.”
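To make that kind of disparity concrete, here is a minimal sketch (in Python, with invented example data – not the study’s actual code or datasets) of how accuracy is typically broken down by group when auditing a gender classifier:

```python
from collections import defaultdict

# Hypothetical audit records: (group, true gender, predicted gender).
# These values are invented for illustration -- not data from the Boulder study.
records = [
    ("cis woman", "woman", "woman"),
    ("cis man", "man", "man"),
    ("trans woman", "woman", "man"),        # a misgendering error
    ("trans man", "man", "man"),
    ("non-binary", "non-binary", "woman"),  # a binary-only label set can never be correct here
]

def accuracy_by_group(records):
    """Share of correct predictions for each self-identified group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, predicted in records:
        total[group] += 1
        correct[group] += int(predicted == truth)
    return {group: correct[group] / total[group] for group in total}

print(accuracy_by_group(records))
# e.g. {'cis woman': 1.0, 'cis man': 1.0, 'trans woman': 0.0, 'trans man': 1.0, 'non-binary': 0.0}
```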

The possible explanation for this inaccuracy also points us to the solution. The study surmises that it was likely that “the training data used to train FA services [did] not include transgender individuals – at least those who [did] not perform gender in a cisnormative manner.” The study can only presume this because gender labelling, like many AI programs, is a “black box”: we see the output, but don’t understand how the algorithm reached its conclusion. When it comes to transgender folks, the large discrepancy could likely be fixed by simply including more trans people in the datasets used to train the FA models.

For non-binary people, the fix would likely need to be more ambitious. Not only would they have to be included in any training data set, as with trans people, but facial AI technology would also have to offer output labels such as “non-binary,” “agender” or “genderqueer.” Yes – the study was published in 2019, and FA has likely improved quite a bit since then. But if trans people aren’t included in the data used to train FA models, and the programs aren’t designed to output a label like “non-binary,” then the issue of FA misgendering non-cis people will persist.
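Here is a minimal sketch of those two fixes (again in Python, with hypothetical label names and made-up dataset numbers – not any vendor’s real API or data):

```python
# Fix 1: the model's output label space has to go beyond the binary.
BINARY_LABELS = {"man", "woman"}
INCLUSIVE_LABELS = BINARY_LABELS | {"non-binary", "agender", "genderqueer"}

# Fix 2: the training data has to actually contain people from each group,
# including trans men and trans women. Counts here are invented for illustration.
training_examples_per_group = {
    "cis man": 50_000,
    "cis woman": 50_000,
    "trans man": 0,
    "trans woman": 0,
    "non-binary": 0,
    "agender": 0,
    "genderqueer": 0,
}

def underrepresented_groups(counts, minimum=1_000):
    """Groups with too few training examples for a model to learn from."""
    return [group for group, n in counts.items() if n < minimum]

# A label outside the output set can never be predicted, no matter how much data is added.
print("non-binary" in BINARY_LABELS)                         # False
print(underrepresented_groups(training_examples_per_group))  # the gaps that need filling
```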

From anecdotal experience, I presume that there have already been advances on this issue since the study. This mostly comes from seeing more and more forms and surveys include options for gender identity beyond “man,” “woman” or “do not wish to disclose.” When governments and large organizations acknowledge and provide options to identify beyond the gender binary (Canada was the first country to provide census data on trans and non-binary people), it helps set the standard for tech companies to be more inclusive in how they handle gender.

Of course, whatever progress has come since 2019 is not enough. In 2021, non-binary and trans scientists advocated for US governmental agencies to provide gender-inclusive options. An app called Giggle, self-described as a social media app for “females,” uses FA technology to determine whether a potential user is male or female. If female – they can join. If not, they can’t. While the app used to include trans women, the CEO of Giggle now openly admits that it excludes them, using FA technology from Kairos AI to do so.

The results of the Boulder study are very useful for those who want to design FA systems that include the trans community, and for LGBTQ people and allies who want to push for better FA systems. However, Giggle raises a much larger point – that perhaps, without the right guardrails, FA technology that is better at correctly identifying non-cis people might actually have catastrophic effects. A side effect of FA systems that are better at recognizing trans and non-binary folks is that those who wish to use FA against those communities now have a more accurate weapon to do so. In its effort to exclude trans women, Giggle found its AI was excluding cis women as well – with FA that is better at detecting trans people, Giggle’s ability to discriminate accurately would only grow.

Not to belittle the harm Giggle is causing the trans community – but the stakes are much higher than one transphobic social network. James Vincent of The Verge wrote an excellent article detailing the case that many LGBTQ groups are making: ban automatic gender recognition technology entirely. The article describes the situation in Chechnya, where LGBTQ people are being imprisoned by authorities even without the use of AI. With FA that can accurately identify LGBTQ people, authoritarians would be able to imprison trans and non-binary folks with even more ease. Another example is housing discrimination, a common issue for transgender people – and one that could get worse as some landlords push for facial recognition security systems. With the proliferation of anti-LGBTQ bills in the United States, is it really that much of a stretch to suggest that FA technology that accurately recognizes trans and non-binary people will do more harm than good?

I’d argue it’s not a stretch – but on the other hand, abandoning the project of creating trans-inclusive AI and facial recognition technology would exclude those communities from the benefits the technology can bring. For example, in medicine (where trans people already get the short end of the stick), facial recognition technology has been shown to be adept at detecting rare genetic conditions. Trans folks deserve all the benefits that medicine brings to cis folks.

Nonetheless, all the medical advancements in the world don’t justify (1) not developing a stronger anti-discrimination legal and technological framework that protects trans folk, and (2) not banning the use of FA technology where the risk of harm to trans, non-binary folk and other vulnerable communities far outweighs any benefits. Developing a strong protective framework will be difficult, but AI developers, technologists and policy-makers have a moral imperative to start working to protect and empower trans and non-binary people, through centring the voices of these vulnerable communities in their technical and political decisions.


WARREN URQUHART is a soon-to-be consumer protection lawyer finishing up the licensing process in Ontario (none of the opinions expressed represent the views of Warren’s employer). When he’s not writing or working, he’s drinking coffee or lifting weights (sometimes at the same time!).
