Machine learning, AI, and other complex algorithms have shown promise in automation. But for all their potential to free us from menial tasks, they are still created and developed by people with biases, and they carry those biases into their systems. Studies, complaints, and lawsuits are beginning to expose how these systems affect LGBTQ+ communities.

NBC recently published an article stating, “Both PinkNews, a U.K.-based LGBTQ news site, and The Advocate, a U.S.-based LGBTQ news site, saw 73 percent of their positive or neutral content blocked by these lists.” Other platforms, like YouTube, have also continued to face scrutiny over their advertising and content creator policies. In YouTube’s case, these policies allegedly ban or restrict LGBTQ+ content while far-right extremists and hate groups are able to capture large audiences.

“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions… They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”

US Representative Alexandria Ocasio-Cortez

The methods behind this censorship are complex but can be broken down simply. In some cases, a single keyword like “lesbian” is marked as sexually explicit. Other times, an algorithm may act on user reports, say from someone who is homophobic and flags the content. When these situations occur, it can take weeks to find a solution, if one is found at all.
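To make the first failure mode concrete, here is a minimal, hypothetical sketch of a naive keyword blocklist, the kind of filter described above. It is not any platform's actual code, and the blocklisted terms are assumptions for illustration; the point is that the filter matches a word with no regard for context, so a neutral news headline is treated the same as explicit material.

```python
# Hypothetical sketch of a naive keyword blocklist -- not any
# platform's actual code. Terms are assumed examples.
BLOCKLIST = {"lesbian", "gay", "bisexual"}

def is_restricted(text: str) -> bool:
    """Flag content if ANY blocklisted keyword appears, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A neutral headline is blocked exactly like explicit content would be:
print(is_restricted("Lesbian couple opens community bookstore"))  # True
print(is_restricted("Local bakery wins small-business award"))    # False
```

Because the check is context-blind, fixing one false positive by hand does nothing for the next headline; the bias lives in the rule itself.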

These reports are only a few recent examples; more lawsuits and studies may yet uncover bias across other technologies. Silicon Valley and its tech titans envision an autopilot world where mundane tasks, and maybe even driving, can be completely automated by information systems. Without a strong presence of women, communities of color, and LGBTQ+ people both creating technology and critiquing it during development, our automated future may just be automated inequality.
