Heyya! *Waves with hand open like Mr. Spock* Ever since I attended the first BCM 325 seminar of the Autumn session, the concept of a ‘novum’ has intrigued me. Not surprisingly, a search through Instagram reveals that there is indeed a large audience who also enjoy exploring various elements within the sci-fi and speculative fiction genres […]
Every parent worries about their child. In an age of mobile phones, microchips and other advanced technology that can be used to pinpoint locations, why would parents not track their children? We live in a world where cybernetics and emerging technologies give us the power of knowledge and information beyond our own physical, human capabilities. What, then, are the ethical implications of ‘stalking’ a child and their internet usage, and of willingly allowing ourselves to be programmed by this technology into thinking that this kind of behaviour is normal?
Cyber-cultures refers to “issues and concerns which have arisen as a result of the proliferation of digitally-enabled communication, networked computation and media technologies and internet practices” (Moore, 2018). Within this relationship between the digital and the real, we can see that technology is making considerable inroads into ever more areas of human activity.
When it comes to artificial intelligence, many people believe that the introduction of artificial intelligence and robots could lead to a dystopian world similar to that portrayed in “Terminator Salvation” (p.s. Terminator Salvation is a terrible film), where robots have enslaved humanity. Whilst not entirely implausible, a much more immediate moral concern is unemployment, with the World Economic Forum suggesting that as many as 5 million jobs across 15 developed and emerging economies could be lost by 2020 (Brinded, 2016). In fact, many people are already starting to lose their jobs to machines, with self-serve checkouts being a prominent example of a machine doing a job previously undertaken by human employees, but with greater efficiency and at lower cost. However, I am more focused on investigating the threat posed by human-like robots, rather than machines in general. Why? Because that’s what society imagines when you mention artificial intelligence: machines that replicate our human bodies.
In his book ‘Digital Soul: Intelligent Machines and Human Values’, Thomas M. Georges hypothesises how the introduction of sentient beings into society might be received by humans. Georges states that “learning to live with superintelligent machines will require us to rethink our concept of ourselves and our place in the scheme of things” (Georges 2003, p. 181). This statement raises many philosophical questions, which I will explore in my next blog post alongside an in-depth look at the 2015 film ‘Ex Machina’. Georges’ statement does, however, imply that, unsurprisingly, living with robots would cause some conflict and would not be a smooth transition for humans. Having said that, many would point out that we already live amongst various forms of “weak” AI such as Siri or Cortana, smart home devices and somewhat annoying purchase predictions. However, these are only “weak” AI, and we are still a long way from a society where humans co-exist with sentient beings. All we can do, for now, is worry and imagine.
Twitter is all a-flutter about Tay, the racist lady-AI from Microsoft who was taken offline less than a day after her launch. According to her makers, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” Unfortunately, this made her extremely easy to manipulate, and she was quickly transformed into a genocide-loving racist.
Tay is an example of a phenomenon in AI theory: the emergence of a gendered AI.
AI has been described as the mimicking of human intelligence to differing degrees: ‘strong AI’ attempts to recreate every aspect of it, at far greater cost in money, resources and time, while ‘weak AI’ focuses on a specific aspect. Tay, as a female AI targeted at 18–24 year olds in the US, is very much about communicating with Millennials. In my previous posts, I’ve mentioned a number of AI representations in the media, all of which are gendered, usually as female. Dalke and Blankenship point out that “Some AI works to imitate aspects of human intelligence not related to gender, although the very method of their knowing may still be gendered.”
They go on to suggest that the Turing Test “arose from a gendered consideration, not a technological one”: in Turing’s original paper proposing the test, the examiner is trying to determine the difference between a man and a woman, and the same differentiation process could then be applied to humans and AI.
If AI is gendered, then researchers are effectively proposing that there is an algorithm for gender, which in our post-feminist context seems to oversimplify the issue. Gender is constructed, and it would be constructed for an AI during its development in the same way that humans construct and reconstruct their own gender in tandem with their identity.
Tay is a glorified bot that responds to specific stimuli. Or perhaps it’s the other way around: AI in general is a glorified bot designed to respond to stimuli and learn from them.
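To make that idea concrete, here is a minimal Python sketch of a bot that stores whatever users say and parrots it back. This is entirely my own toy example, not anything from Microsoft's actual system; the class name NaiveLearningBot and the slogans are invented for illustration. What it shows is why unfiltered "learning from stimuli" is so easy to poison in the way Tay was.

```python
import random
import re
from collections import defaultdict

class NaiveLearningBot:
    """Toy chatbot that 'learns' by storing every phrase users send it,
    indexed by the words it contains, and parrots stored phrases back.
    There is no filtering or moderation layer -- which is the whole problem."""

    def __init__(self):
        # word -> list of phrases the bot has absorbed verbatim from users
        self.memory = defaultdict(list)

    @staticmethod
    def _words(text):
        # Crude tokenisation: lowercase words only, punctuation stripped.
        return re.findall(r"[a-z']+", text.lower())

    def learn(self, message):
        # Store the raw phrase under every word it contains, with no vetting at all.
        for word in self._words(message):
            self.memory[word].append(message)

    def reply(self, message):
        # Parrot back anything previously "learned" about a word in the question.
        for word in self._words(message):
            if word in self.memory:
                return random.choice(self.memory[word])
        return "Tell me more!"

bot = NaiveLearningBot()

# A small, coordinated group floods the bot with the same slogan...
for _ in range(50):
    bot.learn("cats are terrible and should be banned")

# ...and an innocent user asking about cats now gets the planted line back.
print(bot.reply("what do you think about cats?"))
```

With fifty copies of one planted phrase and zero vetting, the "smartest" thing this bot can say about cats is the slogan it was fed, which is, in miniature, what a coordinated group of users did to Tay within a day.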
“Today’s world is full of distributed agencies and virtual potentials, rippling deconstructions and flash-point emergences, all eluding easy categorization or comprehension, at least by means of yesterday’s models. The future is not what it used to be: it is much more unpredictable, dangerous, sly and interesting.” Christopher Vitale, 2013. Networkologies: A Philosophy of Networks for a Hyperconnected Age — A Manifesto. Zero Books, UK.
"Lines of light ranged in the nonspace of the mind, clusters and constellations of data." Neuromancer (@GreatDismal) .