Study Claims (a Highly Sexist, Racist) AI Can Determine Your Sexual Orientation

There are so many problems with this.


Stanford University recently published a study claiming that a deep neural network (i.e., artificial intelligence) was better than humans at detecting someone’s sexual orientation. In other words, the developers of this AI say they were able to create a “gaydar” of sorts, as The Guardian puts it. Michal Kosinski and Yilun Wang, the researchers behind the software, loaded it up with 35,000 profile photos pulled from various dating sites, which formed the “seed” for the machine learning. Eventually, it was able to correctly distinguish between gay and straight men 81% of the time, and between gay and straight women 74% of the time. That stands in stark contrast to humans, who were only able to guess correctly at rates of 61% and 54%, respectively.
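To make the mechanics a little more concrete, here’s a minimal sketch of the kind of pipeline the study describes: take a feature vector for each photo, train a simple binary classifier on labeled examples, and measure its accuracy on held-out data. Everything below (the random stand-in features, the logistic-regression model, the variable names) is an assumption for illustration, not the researchers’ actual code.

```python
# Minimal sketch of a photo-based binary classifier, assuming features
# have already been extracted from each image (e.g., by some pretrained
# face-recognition network). All data here is random stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X = rng.normal(size=(35_000, 128))   # one 128-d feature vector per photo
y = rng.integers(0, 2, size=35_000)  # one binary label per photo

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")

# With random features and random labels, accuracy hovers around 50%.
# The classifier only beats chance when the features carry real signal,
# which is why *what* those features encode (grooming, presentation,
# photo styling) matters so much to how the results get interpreted.
```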

As it should, the existence of such a tool has sparked a debate about the ethics of facial scanning and whether we should be using machines to answer such questions at all. Privacy issues aside, the study used some pretty problematic methodologies to determine sexual orientation, relying on very hetero- and gender-binary-normative criteria, namely “grooming style” and features that are “gender atypical.” Basically, this means it looked for features that were not prevalent among other people of the same gender, like men who appeared effeminate or women who appeared masculine. At best, it comes off as a reductive look at how gender is performed across a binary, rather than a reflection of the gender and sexuality spectrums that actually exist.

What’s especially disconcerting is the fact that, according to The Guardian, the software wasn’t tested on trans or bisexual people, or even on people of color: “People of color were not included in the study, and there was no consideration of transgender or bisexual people.”

Listen: this sentence right here explains a lot of what’s wrong with tech in general. So much of it is created with a focus on the experience of men, and more often than not, straight white cis men. Considerations for folks who don’t identify as straight, white, cis, or male (or any combination thereof) are too often painted as extraneous, superfluous, or altogether unneeded. They’re an afterthought, an edge case, a patch to be added later, after release. And because it fails to account for a significant chunk of the population, this “AI” is straight-up useless.

Granted, this is likely a first iteration, one that will undergo further development. But that’s kind of exactly the problem I allude to just above: the experiences of white people are often placed ahead of everyone else’s, and in many cases, they’re treated as the only experiences that matter.

Take, for instance, FaceApp, a seemingly harmless bit of novelty software that digitally manipulates your or your friends’ photos to make the people in them smile, frown, age, or even look like “the opposite sex.” It recently made headlines when it unveiled what’s basically a “blackface filter,” which alters a photo to make the person in it look Black. It did so by applying some pretty stereotypical, racist features to the given photo. Also included in the pack were filters to make yourself look Asian, Indian, or Caucasian. It was an ill-conceived idea that should’ve been stopped right at the get-go. But it wasn’t, and many folks took note, taking to Twitter to demonstrate just how messed up it all was.

I’m sure that, to the developers, it seemed like an interesting idea at the very least, given that they committed development time and cycles to creating the filters. And therein lies the rub: nobody working on it took a moment to think, “Hey, this is kind of messed up.” All that mattered was creating the thing, with nary a thought for the experiences of a large swath of people. Incredible, no?

I wish I could say that this is a standalone instance of racism in tech, but as you and I and all of us here know, it’s a problem that’s endemic across the industry as a whole. The myopic view the industry takes of the general public has been written about loads of times, by people who are better writers than I am, so I won’t go too deep into it here. Suffice it to say, though, that it’s a big fucking problem, and as we delve deeper into this technology-studded Information Age, it’s important to address it.

As Amy Langer recently wrote for us, the idea of an objective robot or AI is a myth.

A paper published earlier this year in Science explored how AI systems learn language and semantics. In this paper, the researchers explained that as AI systems acquired language through inputted text, these systems also acquired “imprints of our historical biases.” Co-author Joanna Bryson explained, “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.” Like a parrot that repeats back the dirty words it hears, the systems we create echo what we say about ourselves and each other.
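For a sense of how that kind of “learning” gets measured, here’s a minimal sketch of the basic idea behind word-embedding bias tests like the one in that paper: check how strongly a word’s vector associates with gendered words using cosine similarity. The tiny hand-made vectors below are rigged placeholders for illustration; a real test would load embeddings trained on actual text (e.g., GloVe or word2vec).

```python
# Minimal sketch of measuring gendered association in word embeddings.
# The 3-d vectors are toy stand-ins; real embeddings have hundreds of
# dimensions and are learned from large text corpora.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vec = {
    "he":       np.array([1.0, 0.1, 0.0]),
    "she":      np.array([0.1, 1.0, 0.0]),
    "engineer": np.array([0.9, 0.2, 0.3]),  # rigged to lean toward "he"
    "nurse":    np.array([0.2, 0.9, 0.3]),  # rigged to lean toward "she"
}

def gender_association(word: str) -> float:
    # Positive: closer to "he"; negative: closer to "she".
    return cosine(vec[word], vec["he"]) - cosine(vec[word], vec["she"])

for w in ("engineer", "nurse"):
    print(f"{w}: {gender_association(w):+.2f}")

# Embeddings trained on real text show the same pattern at scale: the
# model faithfully picks up whatever associations the corpus carries,
# which is exactly the "imprint of our historical biases" quoted above.
```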

Algorithms, robots, artificial and emergent intelligences, deep learning neural networks—not a single one of them is exempt from the underlying biases and prejudices that come to define our own experiences. Think about it: how hard is it to get someone to acknowledge their privilege, or to admit that something they did may have been racist or sexist? Does that viewpoint suddenly disappear if and when that person creates something supposedly “in their image”—or someone, as in the case of raising kids? (Spoiler: it doesn’t.)

Langer looks upon such revelations with an optimistic (and honestly, refreshing) view, however, using Portal’s GLaDOS and Chell as examples of such an interaction. She writes:

As we surround ourselves with increasingly automated systems, we’d do well to examine how our decades of human biases are programmed into the devices that populate our homes, offices, and pockets. Without this examination, we’ll find that the worst of the voices in our heads—voices that, granted, were “programmed” into us by society at large—will find their way into what we create. We could end up compounding misogyny instead of correcting it.

But, don’t despair: this is ultimately hopeful. Just as Chell needed to face the AI that was hounding her in order to destroy it, we would do well to recognize that robots are not some “other.” We should stare at them until we see ourselves reflected back in their chrome surfaces.

If your “innovative tech” doesn’t account for the experiences of the marginalized, then it’s neither innovative nor tech. It’s just another venue through which people are exposed to even more bias, prejudice, and further marginalization, all of which are, fundamentally, the antithesis of what tech is supposed to represent. The world hardly needs your exclusionary software, whatever “good intentions” you may have had for it. We are downright drowning in your good intentions.

(image: Flickr/Mercury Redux)


