How AI is combatting racism in web content

Of late, artificial intelligence (AI) has faced intense scrutiny for being on the wrong side of racism. Increasingly, we are becoming aware of the ways in which AI exacerbates the injustices faced by BAME groups, particularly with regard to facial recognition technology. We have also learned how AI and algorithmic bias can disadvantage people on the basis of gender, as Amazon discovered with its recruiting tool that discriminated against women.

However, one company is turning that narrative on its head by spearheading AI for good. UserWay, a pioneer in innovative website accessibility technologies, has unveiled its AI-Powered Content Moderator tool, which takes action against offensive, harassing, and racist content.

The Content Moderator flags discriminatory, biased, or racially charged language for site administrators to review and approve. What's more, it offers “nuanced alternatives to words and phrases that may be considered sensitive.”

Much of today’s business vocabulary has, unbeknownst to most, racist or discriminatory connotations. For many companies, it is difficult to be certain which words are culturally sensitive. Moreover, going through a brand’s content with a fine-tooth comb takes time that is not always available; if a website visitor sees something phrased insensitively, the damage is already done.

Thus, businesses need reinforcement, and the Content Moderator delivers exactly that. Ahead of launching the tool, UserWay ran its rule engine across more than 500,000 websites. Of the sites it scanned, 22% contained some form of biased, racially charged, or offensive language. Of those:

  • 52% showed instances of racial bias
  • 24% showed instances of gender bias
  • 12% showed instances of age bias
  • 5% had racial slurs
  • 3% showed instances of disability bias

The most flagged terminology for racial bias included “blackmail”, “whitelist”, “black sheep”, “blacklist”, and “black mark.” Those most often flagged for gender bias included “chairman”, “fireman”, “mankind”, “forefather”, and “man-made.” For some, the discriminatory connotations behind these words may not be obvious, which is precisely why businesses can benefit from a tool like UserWay’s. Beyond its AI-powered flagging, it also educates teams about the terminology they should steer clear of.
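To make the idea concrete, the core pattern behind this kind of term flagging can be sketched as a simple dictionary lookup with word-boundary matching. This is purely illustrative: the term list, the suggested alternatives, and the `flag_terms` helper are assumptions for the sketch, not UserWay's actual rule set or API.

```python
import re

# Illustrative mapping of flagged terms to more neutral alternatives.
# These pairings are common inclusive-language suggestions, not
# necessarily the ones the Content Moderator proposes.
FLAGGED_TERMS = {
    "whitelist": "allowlist",
    "blacklist": "blocklist",
    "chairman": "chairperson",
    "fireman": "firefighter",
    "mankind": "humankind",
    "man-made": "artificial",
}

def flag_terms(text):
    """Return (matched_term, suggestion, position) tuples for each hit.

    Matching is case-insensitive and anchored on word boundaries, so
    'chairman' is flagged but a word merely containing it is not.
    """
    findings = []
    for term, suggestion in FLAGGED_TERMS.items():
        pattern = r"\b" + re.escape(term) + r"\b"
        for match in re.finditer(pattern, text, re.IGNORECASE):
            findings.append((match.group(0), suggestion, match.start()))
    return sorted(findings, key=lambda f: f[2])

for found, alt, pos in flag_terms("The chairman added the server to the whitelist."):
    print(f"offset {pos}: '{found}' -> consider '{alt}'")
```

A production system like UserWay's would layer context awareness on top of raw matching (so that, say, “blackmail” in a legal context is handled sensibly), but the review-and-approve workflow the article describes follows naturally from a findings list like the one above.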

The tool is also highly relevant to the present moment. It has already been updated to flag derogatory terminology surrounding COVID-19, such as “Covidiot” or “China virus.”

UserWay stands firm on “equality in all forms.” In particular, Allon Mason, UserWay’s CEO and Founder, said that “We believe accessibility is not limited to equal access for the differently-abled but serves as an extension of greater equality across race, religion and gender.” 

He went on to comment that “The goal of the Content Moderator isn’t to censor or silence but to make web teams aware of problematic language in user-generated content or in content they may have overlooked.”