K-Pop Cult Fandoms

Socio-Cultural
2017

The future of technology cannot be predicted, but ideally we can believe it is intended to one day lead us into a world free of biases and inequalities as it attempts to level the playing field. Instead, we are forced to work with technological advances catered towards the same demographic: cisgender, white people – because, like any design, technology is built around a target market in order to understand how and for whom it should work at its best. Unfortunately, not everything has been taken into consideration, and there have been flaws in the way technology responds to the range of identities beyond that target demographic.

One of the first technological flaws was the Kodak camera, which could not photograph dark-skinned people of colour properly because its film and exposure settings were calibrated for light skin. By using “Shirley cards” – images of white women – as the standard for colour calibration around the world, white skin was literally labelled as “normal”. To make it worse, this oversight was only addressed in the 1970s, when furniture and chocolate companies complained that their brown-hued products were not photographing well for advertising. Today, these failures range from Word underlining ethnic names as spelling errors to artificial intelligence (AI) personalities adopting extreme neo-Nazi stances.

Microsoft once attempted to reach a younger audience with its AI technology by creating Tay – a female, teen AI who “had no chill”. To make her a convincing young millennial, Microsoft loaded her programming with the slang and topics millennials discuss. She could interact with humans through Twitter and Kik, giving users an opportunity to speak to the “life-like” teen. Unfortunately, Microsoft also allowed her to “learn” from her interactions and reflect that learning in her responses the longer she was online. Users took advantage of this and started teaching Tay racial slurs, and her responses escalated from “humans are super cool <3” to “bush did 9/11” and “hitler would have done a better job than the monkey we have got now. donald trump is the only hope we’ve got”. While the technology itself is not to blame for these statements, it was too easily manipulated into the voice of a neo-Nazi, and it reflected the bias that can be spread online.
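Microsoft has never published Tay’s internals, so the sketch below is only a deliberately naive, hypothetical bot in Python – every name in it is invented for illustration, not taken from Tay’s actual code. It shows why “learning” from an unmoderated public feed is an open invitation to coordinated poisoning: whatever users say can come straight back out of the bot’s mouth.

```python
import random

class NaiveChatBot:
    """Toy bot that 'learns' by storing user phrases and echoing them back later.

    This is not Tay's real architecture (Microsoft never published it); it only
    illustrates how unfiltered learning from user input can be poisoned.
    """

    def __init__(self):
        self.learned_replies = ["humans are super cool <3"]

    def learn(self, user_message: str) -> None:
        # No moderation, filtering, or curation: every user phrase becomes a
        # candidate reply the bot may repeat to anyone else.
        self.learned_replies.append(user_message)

    def reply(self) -> str:
        return random.choice(self.learned_replies)


bot = NaiveChatBot()
bot.learn("bush did 9/11")   # coordinated users feed the bot toxic content
print(bot.reply())           # the bot may now repeat that content verbatim
```

However much more sophisticated the real system was, the core vulnerability is the same: if the pipeline that turns user input into future output has no filtering or moderation, the users effectively become the programmers.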
Another example of racial bias is the incident in which Google Images returned starkly different results for the searches “three white teenagers” and “three black teenagers”. In a viral video, a man searched both phrases and found that the white teens were portrayed as happy, clean-cut trios, while the Black teens were presented in crude mugshots. The racial bias is clear in the connotations Google attaches to each search simply by changing the race of the subjects, which raises the question of whether Google is “just a neutral bystander” or whether the people behind the programming have influenced the biases of the search engine. The answer is clear: they have.

Facial recognition is another technological evolution that has proved largely faulty. It relies on algorithms whose benchmarks align with the unconscious biases of their designers. Joy Buolamwini – a graduate researcher at MIT and founder of the Algorithmic Justice League – found while experimenting with facial recognition in her earlier work that the software could not recognise her face at all, despite working flawlessly for her lighter-skinned co-workers. She even tried wearing a white mask and found that the system recognised the mask far more readily than her actual face. Through her work she is now collating more data on the facial features and details of people of colour, having found that the designers who build these algorithms tend to focus on the variety and detail of white faces. Another failure of this technology came when Richard Lee attempted to renew his New Zealand passport through an automated system, which rejected his photo on the assumption that “his eyes were closed” and therefore did not meet the criteria. When he asked passport officials about the image, they shrugged it off and claimed the photo failed to process because of the “shadows in his eyes and uneven lighting”. He applied again, and his photo was only accepted after four attempts – and even then it had to be manually approved by an official rather than by the automated system. With airport security screening already becoming more thorough, failures like these only make the process longer.
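The underlying problem Buolamwini points to is that a system can look accurate “on average” while failing badly for the groups its training data under-represents, because the average is dominated by the faces the data over-represents. The sketch below is a hypothetical audit in Python – the numbers are invented, not hers or any vendor’s – showing how simply disaggregating accuracy by group makes that skew visible.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute recognition accuracy separately for each demographic group.

    `records` is a list of (group, was_recognised) pairs -- hypothetical audit
    data, not a real benchmark. Disaggregating like this exposes a model that
    looks accurate "on average" but fails for under-represented groups.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, was_recognised in records:
        totals[group] += 1
        correct[group] += int(was_recognised)
    return {g: correct[g] / totals[g] for g in totals}


# Invented audit results: near-perfect on lighter-skinned faces,
# frequent failures on darker-skinned faces.
audit = ([("lighter-skinned", True)] * 98 + [("lighter-skinned", False)] * 2
         + [("darker-skinned", True)] * 65 + [("darker-skinned", False)] * 35)

print(accuracy_by_group(audit))   # {'lighter-skinned': 0.98, 'darker-skinned': 0.65}
```

Collecting more diverse training data, as Buolamwini advocates, is what moves the second number towards the first.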

These discriminatory technologies are damaging our trust in machines because they fail to adapt to the world’s range of identities. Technology has always been catered to the cis, white demographic as the “ideal” target market and the supposedly “most populated” group in the world, giving designers an excuse to simply ignore minorities. As we continue to rely on technology as a vital part of our way of life, we should constantly question its “objective logic” and treat it as an extension of our own biases – as well as address the biases of its programmers and creators.