

WL4102 Shorter Blog Essay: Machine Translation and Gender Bias

For my shorter blog essay, I decided to look at bias in machine translation with a particular focus on Google Translate and gender. How does Google Translate work? Where can we see gender bias and why does this occur? How does this link to linguistic corpora? These are some questions upon which I hope to reflect in this blog post.

Google Translate was launched in 2006 with the goal of “[breaking] language barriers and [making] the world more accessible”. With over 500 million users and over 100 languages available, the service was translating more than 100 billion words per day by 2016. These figures show what a fundamental role Google Translate plays in translation for many people around the world. The service is an example of Machine Translation (MT): translation from one natural language to another by computer. (Source: Ten years of Google Translate)

Google Translate was originally based on statistical machine translation. Phrases were first translated into English and then cross-referenced with databases of documents from the United Nations and the European Parliament. These databases are corpora, e.g. the ‘European Parliament Proceedings Parallel Corpus 1996-2011’. Although the translations produced were not flawless, the broad meaning could be conveyed. In November 2016, it was announced that the service would move to neural machine translation, which compares whole sentences at a time and draws on more linguistic resources. The aim was greater accuracy, as more context would be available. The translations produced are compared repeatedly, which allows Google Translate to detect patterns between words in different languages. (Source: Google Translate: How does the search giant’s multilingual interpreter actually work?)

Although accuracy has improved, one issue that remains is that of gender. Machine Translation: Analyzing Gender, a case study published in 2013 as part of Gendered Innovations at Stanford University, shows that translations between a language with fewer gender inflections (such as English) and a language with more gender inflections (such as German) tend to display a male default: nouns are shown in the male form and male pronouns are used, even when the text specifically refers to a female. The case study also shows, however, that this male default is overridden when the noun refers to something considered stereotypically female, such as ‘nurse’. I tested gender bias myself on Google Translate by translating the grammatically gender-neutral term ‘the secretary’ from English to German. As can be seen in the photo below, ‘the secretary’ on its own is automatically translated as ‘die Sekretärin’ (the secretary, female), but when combined with ‘of state’ it is automatically translated as ‘der Staatssekretär’ (the secretary of state, male). This shows a very obvious bias in terms of gender roles. (Sources: Machine Translation | Gendered Innovations; Google Translate’s Gender Problem (And Bing Translate’s, And Systran’s))

The above example shows a simple word with the definite article and gives no other context. The machine therefore had to rely on frequency, choosing the gender most often used in translations of this word in the corpora on which it is based. In those corpora, ‘the secretary’ is more frequently shown to be female and ‘the secretary of state’ more frequently male. Although Google Translate has moved from purely statistical translation to neural machine translation, it still displays issues stemming from statistical methods. This 2017 article by Mashable features further examples. In it, a Google spokesperson is quoted as saying over email, “Translate works by learning patterns from many millions of examples of translations seen out on the web. Unfortunately, some of those patterns can lead to translations we’re not happy with. We’re actively researching how to mitigate these effects; these are unsolved problems in computer science, and ones we’re working hard to address.” (Sources: Machine Translation | Gendered Innovations; Google Translate might have a gender problem)
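To make the frequency argument concrete, here is a minimal sketch of how a purely statistical approach can reproduce corpus bias. This is a toy illustration only, not Google’s actual algorithm: the parallel “corpus” and its counts are invented for demonstration, and the translator simply returns whichever German phrase appears most often for a given English phrase.

```python
from collections import Counter

# Toy parallel "corpus" of hypothetical English→German pairs.
# The counts are invented purely to illustrate the frequency effect.
corpus = [
    ("the secretary", "die Sekretärin"),
    ("the secretary", "die Sekretärin"),
    ("the secretary", "der Sekretär"),
    ("the secretary of state", "der Staatssekretär"),
    ("the secretary of state", "der Staatssekretär"),
    ("the secretary of state", "die Staatssekretärin"),
]

def translate(phrase):
    """Return the most frequent target phrase seen for `phrase` in the corpus."""
    candidates = Counter(target for source, target in corpus if source == phrase)
    if not candidates:
        return None  # phrase never seen in the corpus
    return candidates.most_common(1)[0][0]

print(translate("the secretary"))           # → 'die Sekretärin' (female form wins by frequency)
print(translate("the secretary of state"))  # → 'der Staatssekretär' (male form wins by frequency)
```

With no other context to draw on, the system can only echo whichever gendered form dominates its training data, which is exactly the pattern the secretary example shows.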

The current algorithms behind Google Translate, which use available corpora to translate phrases, give rise to this issue. As corpora show language as it is used, one would hope that the gender balance within them would improve as the world develops more awareness of gender-related issues and comes closer to achieving gender balance. This could then, in turn, have an effect on the translations. The Machine Translation: Analyzing Gender case study suggests a solution that would entail reforming the computer science and engineering curriculum to include “sophisticated methods of sex and gender analysis”, and concludes with a reflection on the complexity of this issue and on the need for new algorithms, understandings and tools. Until then, this issue, as my quick search above shows, is ongoing. (Source: Machine Translation | Gendered Innovations)

For more content relating to corpora and gender, please see my main blog essay.

Update – December 8, 2018: I have written another blog post relating to this short essay in light of an article published two days ago on Google Blog. You can read this post here.

