
Update: Google Translate and Gender Bias

For my shorter blog essay, I wrote a post called Machine Translation and Gender Bias. In it, I discussed the evidence of gender bias on Google Translate. On December 6, 2018, an article that is highly relevant to this blog post was published on Google Blog. I wanted to write another blog post to discuss its content and implications. 

The article, titled ‘Reducing gender bias in Google Translate’, was written by James Kuczmarski, Product Manager at Google Translate. It details a new development in the service that aims to “[address] gender bias by providing feminine and masculine translations for some gender-neutral words”. When translating a single word such as ‘surgeon’ from English into French, Italian, Portuguese or Spanish, or when translating phrases from Turkish into English, the article claims that both the feminine and masculine translations will be displayed. While the German example in my original blog post is, thus, still not covered by this new feature, Kuczmarski states that Google Translate plans to cover more languages in the future. (Source: Reducing gender bias in Google Translate)

I decided to test the new feature out in another language that I study, Spanish. Firstly, I entered the term ‘secretary’: 

Google Translate did indeed provide the masculine and feminine form of the noun in Spanish. However, when I added ‘the’ before the noun, it was clear that the service has not fully solved the gender bias issue:

When the definite article is placed before the noun, the translation still automatically becomes ‘la secretaria’, the feminine form. 

It is great to see that efforts are being made to reduce gender bias on Google Translate. However, given the limited number of languages covered by the new system and the obvious flaw revealed by my quick test (the feature does not function with the definite article), it is clear that there is still much progress to be made.


WL4102 Shorter Blog Essay: Machine Translation and Gender Bias

For my shorter blog essay, I decided to look at bias in machine translation with a particular focus on Google Translate and gender. How does Google Translate work? Where can we see gender bias and why does this occur? How does this link to linguistic corpora? These are some questions upon which I hope to reflect in this blog post.

Google Translate was launched in 2006 with the goal of “[breaking] language barriers and [making] the world more accessible”. With over 500 million users and over 100 languages available, the service was seeing more than 100 billion words being translated per day by 2016. These extremely high numbers show what a fundamental part Google Translate plays in translation for many people around the world. The service is an example of Machine Translation (MT) or translation from one natural language to another using a computer. (Source: Ten years of Google Translate)

Google Translate was originally based on statistical machine translation. Phrases were first translated into English and then cross-referenced with databases of documents from the United Nations and the European Parliament. These databases are corpora, e.g. the ‘European Parliament Proceedings Parallel Corpus 1996-2011’. Although the translations produced were not flawless, the broad meaning could be conveyed. In November 2016, it was announced that the service would change to neural machine translation, which compares whole sentences at a time and draws on more linguistic resources. The aim was to ensure greater accuracy, as more context would be available. The translations produced through this service are compared repeatedly, which allows Google Translate to decipher patterns between words in different languages. (Source: Google Translate: How does the search giant’s multilingual interpreter actually work?)

Although accuracy has improved, one issue that remains is that of gender. Machine Translation: Analyzing Gender, a case study published in 2013 as part of Gendered Innovations at Stanford University, shows that translations between a language with fewer gender inflections (such as English) and a language with more gender inflections (such as German) tend to display a male default, meaning that nouns are shown in the male form and male pronouns are used, even when the text specifically refers to a female. The case study also shows, however, that this male default is overridden when the noun refers to something considered stereotypically female, such as ‘nurse’. I tested gender bias myself on Google Translate by translating the grammatically gender-neutral term ‘the secretary’ from English to German. As can be seen in the photo below, ‘the secretary’ by itself translates automatically to ‘die Sekretärin’ (the secretary, female), but when it is combined with ‘of state’ it translates automatically to ‘der Staatssekretär’ (secretary of state, male). This shows a very obvious bias in terms of gender roles. (Sources: Machine Translation | Gendered Innovations; Google Translate’s Gender Problem (And Bing Translate’s, And Systran’s))

The above example shows a simple word with the definite article and gives no other context. Therefore, the machine had to rely on frequency and chose the gender most used in translations of this word in the corpora upon which it is based. In the corpora, ‘the secretary’ is more frequently shown to be a female and ‘the secretary of state’ is more frequently shown to be a male. Although Google Translate has moved away from solely statistical translation to neural machine translation, it still displays issues stemming from statistical methods. This 2017 article by Mashable features further examples. In it, a Google spokesperson is quoted as saying over email, “Translate works by learning patterns from many millions of examples of translations seen out on the web. Unfortunately, some of those patterns can lead to translations we’re not happy with. We’re actively researching how to mitigate these effects; these are unsolved problems in computer science, and ones we’re working hard to address.” (Sources: Machine Translation | Gendered Innovations; Google Translate might have a gender problem)
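The frequency mechanism described above can be illustrated with a small sketch. This is not Google’s actual system; the phrase table and its counts below are invented purely for illustration. The point is that a purely statistical approach simply picks the most frequently observed target phrase for each source phrase, so the chosen gender flips depending on which exact phrase is looked up.

```python
# Toy phrase table: how often each German translation was (hypothetically)
# observed for each English phrase in a parallel corpus.
# All counts are invented for illustration only.
phrase_table = {
    "the secretary": {
        "die Sekretärin": 830,   # feminine form, more frequent
        "der Sekretär": 410,     # masculine form, less frequent
    },
    "the secretary of state": {
        "der Staatssekretär": 950,    # masculine form, more frequent
        "die Staatssekretärin": 120,  # feminine form, less frequent
    },
}

def translate(phrase: str) -> str:
    """Return the most frequently observed translation of the phrase."""
    candidates = phrase_table[phrase]
    return max(candidates, key=candidates.get)

print(translate("the secretary"))           # feminine form wins on frequency
print(translate("the secretary of state"))  # masculine form wins on frequency
```

Nothing in this selection rule is “biased” by design; the bias enters through the counts, which mirror the gender distribution of the underlying corpus.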

The current algorithms behind Google Translate, which use available corpora to translate phrases, present this issue. As corpora show language as it is used, one would hope that the gender balance within them would improve as the world develops more awareness of gender-related issues and gets closer to achieving gender balance. This could then, in turn, have an effect on the translations. The Machine Translation: Analyzing Gender case study suggests a solution that would entail reforming the computer science and engineering curriculum to include “sophisticated methods of sex and gender analysis”, and concludes with a reflection on the complexity of this issue and on the need for new algorithms, understandings and tools. Until then, this issue, as shown by my quick search above, is ongoing. (Source: Machine Translation | Gendered Innovations)

For more content relating to corpora and gender, please see my main blog essay.

Update – December 8, 2018: I have written another blog post relating to this short essay in light of an article published two days ago on Google Blog. You can read this post here.


WL4102 Main Blog Essay: Critical Discourse Analysis and Corpora – A Corpus-Based Project

In my post on Concordances and Voyant Tools, I touched on the use of corpora in discourse research. In this extended blog post, I will explore this idea further, by looking at the connectedness of critical discourse analysis (CDA) and linguistic corpora, using Sketch Engine to study two corpora, one in Spanish and the other in German.

Introduction

CDA is an approach to studying written and spoken communication and the relationship between this communication and society. Van Dijk states that a central focus of CDA is “(group) relations of power, dominance and inequality and the way these are reproduced or resisted by social group members through text and talk” (van Dijk 2). It emerged from different areas of linguistics, including text linguistics and sociolinguistics, and it relates to several modules that we have studied as part of the BA World Languages programme, particularly WL2102: Introduction to Semiotics. The approach is multidisciplinary and draws on methodological approaches that are effective in examining forms of social inequality, such as inequality based on class, sexuality and religion. (Sources: Critical Discourse Analysis: Theory and Interdisciplinarity: pages 11-15, Aims of Critical Discourse Analysis)

As language forms such an important part of the approach, I wanted to examine more closely how power structures relevant to CDA appear in linguistic corpora, and thus observe how corpus linguistics and CDA connect. I decided to focus on power structures relating to skin colour and gender, using a German-language corpus to examine skin colour and a Spanish-language corpus to examine gender.

Methodology

For my investigation, I used Sketch Engine’s Word Sketch Difference feature, which compares a set of collocates for one lemma in a certain corpus to a set of collocates for another lemma within the corpus. Each lemma is given a colour (red or green) and the collocates that tend to combine with each one are given the same colour. Collocates in white tend to combine with both. If a collocate is shown in dark green or dark red, the collocation is stronger, meaning that the collocate combines far more often with the lemma of that colour and far less often with the other lemma. (Source: Word Sketch Difference lesson | Sketch Engine)
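The comparison behind Word Sketch Difference can be sketched in a few lines of code. Sketch Engine scores each lemma–collocate pair with the logDice association measure (14 + log₂(2·f(x,y) / (f(x) + f(y)))); the difference between the two lemmas’ scores for the same collocate determines which “side” (and how dark a colour) the collocate gets. The counts below are invented for illustration and do not come from esTenTen18.

```python
import math

# Hypothetical co-occurrence counts (invented for illustration):
# how often each collocate appears with each of the two lemmas.
cooc = {
    ("mujer", "sufrir"): 900, ("hombre", "sufrir"): 150,
    ("mujer", "armar"): 40,   ("hombre", "armar"): 600,
}
# Hypothetical total corpus frequencies of each word.
freq = {"mujer": 50000, "hombre": 60000, "sufrir": 20000, "armar": 8000}

def log_dice(lemma: str, collocate: str) -> float:
    """logDice association score: 14 + log2(2*fxy / (fx + fy))."""
    fxy = cooc.get((lemma, collocate), 0)
    if fxy == 0:
        return float("-inf")  # never co-occur: weakest possible association
    return 14 + math.log2(2 * fxy / (freq[lemma] + freq[collocate]))

def sketch_difference(lemma_a: str, lemma_b: str, collocate: str) -> float:
    """Positive -> collocate leans towards lemma_a; negative -> lemma_b."""
    return log_dice(lemma_a, collocate) - log_dice(lemma_b, collocate)

print(sketch_difference("mujer", "hombre", "sufrir"))  # positive: leans to 'mujer'
print(sketch_difference("mujer", "hombre", "armar"))   # negative: leans to 'hombre'
```

A large positive or negative difference corresponds to the dark green/dark red collocates in the interface; values near zero correspond to the white collocates shared by both lemmas.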

To look at group inequalities in a simple way, I decided to use lemmas that represent an opposing power relationship. It is important to note that these terms are not binary oppositions but I see them as opposing in terms of societal power structures. In the German-language corpus, I searched the lemmas ‘schwarzhäutig’ (black-skinned) and ‘weißhäutig’ (white-skinned), drawing on the amount of discrimination historically and presently faced by people of colour. In the Spanish-language corpus, I searched the lemmas ‘mujer’ (woman) and ‘hombre’ (man) based on the gender discrimination frequently experienced by women living in a patriarchal society.

The German-language corpus used was the German Web 2013 corpus (deTenTen13), which contains over 16 billion words. The Spanish-language corpus used was the Spanish Web 2018 corpus (esTenTen18), which contains over 17 billion words and two subcorpora for European Spanish and American Spanish. Both of these corpora are made up of collected web-based texts. (Sources: deTenTen – German corpus from the web | Sketch Engine; esTenTen – Spanish corpus from the web | Sketch Engine)

Results

Skin colour in the German Web 2013 corpus:

The result of my search can be seen here

I decided to look at oppositions relating to skin colour in the German-language corpus for a specific reason: when using the Word Sketch Difference feature on Sketch Engine, the user can only enter a lemma. This means that one cannot enter a term such as ‘black person’ or ‘white person’. As the colours ‘white’ and ‘black’ often refer to any object with that colour, this was also not a helpful search. In German, the adjectives ‘schwarzhäutig’ (black-skinned) and ‘weißhäutig’ (white-skinned) are used, which means they can function as a lemma automatically connected to skin colour.

Although the search results in only one column due to the infrequency of the words in the corpus, this column alone gives us plenty of information. The three nouns that are modified by the adjective ‘schwarzhäutig’ six or more times and that are never modified by ‘weißhäutig’ in the corpus are ‘Hüne’ (giant/hulk), ‘Bastard’ (bastard) and ‘Afrikaner’ (African male). These collocates give us an impression of the language used in online texts around the word ‘schwarzhäutig’. The appearance of the word ‘Bastard’ shows how negative some of this language can be. The presence of the word ‘Afrikaner’ also shows an association between black skin and African males, which is of course a common problem for people of colour from countries outside Africa, as exemplified by CNN’s social media campaign ‘No, where are you really from?’.

These results contrast with ‘Amerikaner’ (American male), ‘Europäer’ (European male), ‘Fremde’ (stranger, foreign person) and ‘Blondine’ (blonde woman), words shown to collocate only with ‘weißhäutig’. Although these terms also position skin colour in terms of country of origin or nationality, no words such as ‘Bastard’ appear, which reflects the power structure.

Gender in the Spanish Web 2018 corpus:

The result of my search can be seen here

The words ‘mujer’ and ‘hombre’ appear more frequently in the Spanish-language corpus than ‘weißhäutig’ and ‘schwarzhäutig’ do in the German corpus.

The two verbs that mainly collocate with ‘mujer’ rather than ‘hombre’ as an object are ‘embarazar’ (to impregnate) and ‘violar’ (to violate/rape). At the bottom of the column, we can see that a verb that commonly collocates with ‘hombre’ rather than ‘mujer’ as an object is ‘armar’ (to arm). These collocates show ‘hombre’ as associated with weapons and ‘mujer’ as a receiver of violence, which shows the power structure. This is furthered when we consider the verb that mainly collocates with ‘mujer’ as a subject: ‘sufrir’ (to suffer).

By examining the column that shows collocates involving the preposition ‘sin’ (without), we can see that ‘mujer’ is associated with collocates such as ‘sin pareja’ (without a partner) and ‘sin hijo’ (without a child), which also shows certain expectations surrounding the role of women.

Conclusion

As a central focus of critical discourse analysis is group relations of power as shown through language, a corpus analysis can help greatly. Although my above examination of two corpora on Sketch Engine only consisted of a short investigation, it provided me with results that showed that power structures can be seen in corpora by examining collocates. In the German-language corpus, the word ‘Bastard’ showed the negativity surrounding the term ‘schwarzhäutig’ and, in the Spanish-language corpus, ‘mujer’ was shown to receive violence and involve societal expectations surrounding partnership and children. In this sense, a corpus-based study relates to the central focus of CDA in studying the reproduction of power relations through communication. I feel that this information will be helpful for me going forward, especially as I will be taking a World Languages module next semester called WL4101: Language and Power. This task has shown me how relevant corpora are in my studies as a language student.

For more content relating to corpora and gender, please see my shorter blog essay.
