The following is an essay I produced for my research in the subject of emerging media issues (BCM310) as a student of the University of Wollongong. I considered it relevant to the topic of cybercultures as well, so I am sharing it here. I am by no means an authority on the matter of superintelligence, but it is a topic which intrigued me. For any comments or feedback, you can reach me at @samhazeldine.
Transcript:
“What will Artificial Superintelligence mean for Human life: A conceptualisation of the coming technological singularity & its impact on human existence”
Introduction
Popular culture has depicted superintelligent or human-level A.I. with varying senses of morality since the cinema of the late 1920s. These representations have forged popular discourses around advanced A.I. and its role as a catalyst, creating a dichotomy of thought towards a dystopian or utopian future beyond the singularity. Academic understanding suggests we utilise cautionary dystopian ideals to reinforce the prevention of uncontrollable A.I. growth. This assumes our technological development reaches a point where deep learning aided by quantum computing is efficient and reliable, after which the singularity can unfold.
Through careful analysis of the works produced by philosophers and theorists such as I.J. Good, Ray Kurzweil and Nick Bostrom, this piece will discuss the potential for artificially superintelligent beings to lead us towards a bright utopian future, or an uncertain dystopian future in which we survive as relics of a bygone era.
Developing the notion of ‘The Singularity’
The original concept of a technological singularity was articulated by the mathematician Alan Turing in the 1950s:
It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control. (Turing, 1952)
This somewhat gloomy prediction plays into the developing notion of an eventual dystopia, one which in the years since Turing's statement has been reinforced by popular culture, with films such as Blade Runner (1982), The Terminator (1984) and I, Robot (2004) portraying machines in control.
A contemporary and colleague of Turing's, I.J. Good, provides another important theory, known as the 'intelligence explosion' (I.J. Good, 1965). This hypothesis details how, upon achieving superintelligence, A.I. will be able to build ever more sophisticated computers, a feedback loop that reaches a speed where innovation is incomprehensible by current standards, creating intellectual potential beyond our reach of control.
This idea of gradual, exponential growth in computational potential echoes Moore's Law: the observation that the number of transistors on an integrated circuit, and with it computing power, doubles roughly every two years.
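As a rough illustration, the trend can be sketched in a few lines of Python. The two-year doubling period and the 30-year horizon below are illustrative assumptions only; the historical doubling figure has varied between one and two years.

```python
# A toy illustration of Moore's Law-style exponential growth.
def projected_capability(base: float, years: float, doubling_period: float = 2.0) -> float:
    """Capability after `years`, doubling every `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

# Example: over 30 years at a two-year doubling period, capability
# grows by a factor of 2 ** 15 = 32768.
print(projected_capability(1.0, 30))  # 32768.0
```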
Author and computer scientist Ray Kurzweil extrapolates this trend to the potential of artificial superintelligence, predicting that it will drive innovation to reach the singularity by 2045:
$1000 buys a computer a billion times more intelligent than every human combined. This means that average and even low-end computers are vastly smarter than even highly intelligent, unenhanced humans. (Kurzweil, 2006)
With regard to impact, Kurzweil reimagined the phenomenon of the singularity as being 'neither utopian nor dystopian but having an irreversible impact on our lives as humans, transforming concepts we rely on to give meanings to our lives'.
While the future beyond the singularity is heavily debated, there is no doubt among those who study A.I. that the singularity will occur; it is only a matter of achieving the necessary level of computing sophistication.
Superintelligent A.I. by 2045
Several modern theorists subscribe to this timeframe, including Kurzweil and the Swedish philosopher Nick Bostrom, who states:
There is more than 50% chance that superintelligence will be created within 40 years, possibly much sooner. By “superintelligence” I mean a cognitive system that drastically outperforms the best present-day humans in every way… (Bostrom, 1997)
This opinion, like Kurzweil's, should be considered just that: an opinion. However, as with all visionaries, the degree of credence that can be placed in their ideas requires further, deep examination. By deconstructing what would need to happen over the next 20-30 years to fulfil such a prediction, we can better understand the likelihood and the consequences of this intelligence explosion occurring. The concept of deep learning is a key factor in the progression towards human-level artificial intelligence.
Deep learning is essentially a computer's ability to capture information from various sources, including user inputs and the analysis of big data, and to encode that information in layered artificial neural networks. This is similar to the functioning of the neural/memory networks created in our brains; however, in machines this method is not limited to physical space the way the human cranium is, thanks to 'the cloud'.
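To make the idea concrete, below is a minimal, hypothetical sketch of the layered structure deep learning relies on: a tiny two-layer forward pass in NumPy. The random weights stand in for parameters a real system would learn from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied between layers.
    return np.maximum(0, x)

# Randomly initialised weights stand in for learned parameters.
W1 = rng.normal(size=(4, 8))   # input layer (4 features) -> hidden layer (8 units)
W2 = rng.normal(size=(8, 1))   # hidden layer -> single output

def forward(x):
    """Pass an input vector through both layers."""
    hidden = relu(x @ W1)      # each layer transforms its input...
    return hidden @ W2         # ...building up more abstract features

x = rng.normal(size=(4,))      # a stand-in "observation"
print(forward(x))
```

Real deep learning systems stack many more such layers and adjust the weights by training on large datasets; the depth of that stack is what 'deep' refers to.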
For example, through programs such as Google DeepMind, experts were able to utilise deep learning techniques to teach their AlphaGo A.I. to defeat the reigning European champion of the board game Go, a 2,500-year-old game vastly more complex than chess (Metz, 2016). Such an achievement is a clear-cut example of the early potential of deep learning technology. Moreover, this method of machine learning is also utilised on a consumer scale, in the form of Netflix viewing and Amazon purchase suggestions, with benefit to both audience and business.
Running in parallel with the development of deep learning technology is the race to develop a stable, usable and reliable quantum computer. Quantum computing processes information as qubits, which can be held in superposition, and applies algorithms to them to solve complex problems potentially much faster than traditional binary computers. Current iterations are in their infancy: cutting-edge examples such as the D-Wave 2000Q 2048-qubit computer are the size of a small bathroom and cost $15 million USD (Temperton, 2017). Despite this, experts at the Google A.I. innovation laboratory have led the surge in turning this potential into results, with Google's director of engineering claiming in 2015, after a collaborative research project with NASA and USRA: “What a D-Wave does in a second would take a conventional computer 10,000 years to do…” (Manners, 2015). However, academics, scientists and philosophers alike concur that this technology still requires significant development in usability and general optimisation before it reaches anything resembling practical application.
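The superposition idea that gives qubits their power can be illustrated with a small classical simulation. This is a hedged sketch, not real quantum computation: a single-qubit state is represented by two amplitudes, and measurement collapses it to 0 or 1 with probabilities given by the squared amplitudes (the Born rule).

```python
import numpy as np

rng = np.random.default_rng(42)

# Equal superposition of |0> and |1> (the state a Hadamard gate
# produces from |0>).
amplitudes = np.array([1.0, 1.0]) / np.sqrt(2)

def measure(state, shots=1000):
    """Sample measurement outcomes from a single-qubit state vector."""
    probs = np.abs(state) ** 2    # Born rule: probability = |amplitude|^2
    return rng.choice([0, 1], size=shots, p=probs)

outcomes = measure(amplitudes)
print(np.bincount(outcomes))      # roughly 500 zeros and 500 ones
```

A real quantum computer exploits many such qubits at once, along with the interference between their amplitudes, which is precisely what classical simulation cannot do efficiently at scale.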
In an attempt to speed up the optimisation and usability of their computers, D-Wave Systems Inc. has introduced Qbsolv, open-source software designed to let anyone with an internet connection experiment with the quadratic unconstrained binary optimisation (QUBO) problems suited to the quantum computer, either through simulation on traditional computers or on one of D-Wave's own systems (a toy QUBO example is sketched after the quotation below). The open-source community has been a tremendous driver for technologies such as Android, WordPress and Linux, helping to remove bugs and improve optimisation, and it is this that inspired the creation of Qbsolv for users to tangle with. The move would please the authors of a 2007 paper in the Journal of Machine Learning Research, who concluded:
Researchers in machine learning should not be content with writing small pieces of software for personal use. If machine learning is to solve real scientific and technological problems, the community needs to build on each others’ open source software tools. (Sonnenburg et al., 2007)
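To make the QUBO formulation concrete: a QUBO problem asks for the binary vector x that minimises x^T Q x for a given matrix Q. The brute-force solver below is a purely classical, hypothetical illustration with arbitrarily chosen values; it is not Qbsolv's actual interface, and real solvers avoid enumerating all 2^n assignments.

```python
import itertools
import numpy as np

# A small example Q matrix (values chosen arbitrarily for illustration).
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

def solve_qubo_brute_force(Q):
    """Try every binary assignment and keep the lowest-energy one."""
    n = Q.shape[0]
    best_x, best_energy = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        energy = x @ Q @ x        # the QUBO objective
        if energy < best_energy:
            best_x, best_energy = x, energy
    return best_x, best_energy

x, e = solve_qubo_brute_force(Q)
print(x, e)                       # [1 0 1] with energy -2.0
```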
By utilising this inherently collaborative approach to development, quantum processing capability will continue on its exponential upward trajectory. Apply this sophisticated method of computing to the equally exciting deep learning potential of machines, and the idea that superintelligent artificial life is more than 30 years away becomes scarcely believable. Kurzweil's prediction of 2045, then, does not appear to be outside the realm of possibility. So what does this mean for humans beyond 2045?
Planning for Singularity
Regardless of the timeline, well-versed researchers broadly agree that superintelligent A.I. will be achieved at some point in the coming decades. At this point of singularity, if events unfold as I.J. Good hypothesised in 1965, we should be rather more concerned:
Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. (I.J. Good, 1965)
These sentiments carry inherent relevance given the nature of deep learning in A.I., and although an intelligence 'explosion' is a fairly dramatic term, the end result could quite possibly be the same, if more gradual. But does our intellectual inferiority necessarily determine our place under machine control, as Turing foreshadowed?
Perhaps a better angle of enquiry is to consider why a number of researchers and industry leaders hold the perception that we have 'no need to be nervous' about a future with superintelligent A.I., as though we will somehow be able to control these machines or simply 'unplug' them, as the notable software engineer Grady Booch expresses:
We are not building A.I. that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end — don’t tell Siri this — we can always unplug them (Booch, 2017).
These ideas are problematic in more than one way. For example, the level of credence placed on integrating human values and laws into the psyche of a superintelligent A.I. is too high, a view shared by the likes of Facebook founder and CEO Mark Zuckerberg (Dopfner and Welt, 2016). It is a naively anthropomorphic assumption that, once superintelligent A.I. begin to create other, more sophisticated machines, our value system won't gradually filter out through each iteration, much like the initial message in a game of Chinese whispers. Booch ends by reflecting that this point in our technological development is far away and that we are being distracted from more 'pressing issues' in society.
This lack of mindfulness surrounding the potential consequences of superintelligence concerns those who advocate for oversight of rapid A.I. development, in particular the philosopher and neuroscientist Sam Harris, who makes one point that resonates powerfully:
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely (Harris, 2016).
The final sentence, although dark and speculative, is an accurate assessment of the state of affairs. For example, the rate at which Google's DeepMind and the various quantum computing and A.I. programs are advancing has resulted in some anxiety, as development remains largely unregulated. Granted, this allows unencumbered freedom of innovation, thereby speeding up development; however, the companies driving forward are taking an incredible risk if superintelligent A.I. comes to exist without proper safeguards in place. Bostrom likens this to children playing with a bomb (Bostrom, 2013).
Such safeguards could be as simple as determining which jobs will remain and which will be redundant once humans are replaced by A.I., a process already underway in the field of manufacturing. A significantly more complex consideration would be reorganising the social structure in areas such as government, education and business management. This will become necessary because the efficiency and overall output of superintelligent A.I. will naturally be higher, so having these machines in roles such as educator, or in organisational positions, will become commonplace.
There has been progress towards safeguarding the development of superintelligence. In 2015, business magnate and futurist Elon Musk, along with several other technology moguls, founded OpenAI, a research company aimed at ensuring 'friendly A.I.'. OpenAI plans to achieve this utopian mission by heeding the cautionary predictions of the likes of Stephen Hawking, Stuart Russell and Nick Bostrom, who believe that entering the singularity unprepared is existential suicide, and by putting A.I. source code into the open-source community for widespread, ubiquitous access. This method seems counter-intuitive; however, by placing the same technology in the hands of everyone, it takes potential power away from any particular individual, company or agency. Co-chairman of OpenAI Sam Altman explains:
Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs, will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else (Levy, 2015).
Despite these noble intentions, it is easy to see how this utopian mission could fall by the wayside through some form of collaboration between like-minded 'bad actors'. Crucially, though, it is a step in the right direction. With the progress that will unfold over the coming decades, it is to the benefit of all humankind that visionaries such as Musk, Kurzweil and Bostrom continue to review and address the risks surrounding superintelligent A.I. development, to remove the possibility of an existential crisis.
Conclusion
A thorough examination of the works of philosophers, scientists, experts and businesspeople leaves no doubt that the singularity will occur; speculation as to when and how consists of generally subjective predictions based on computational trends or isolated empirical research. There is even less certainty about what lies beyond the singularity, so it is essential to apply a cautious, scientific approach to instilling values, ethics and integrity in our original iterations of superintelligent A.I., responding directly to Harris' and Bostrom's anxieties.
An idealistic perspective would see the future avoid the extremes, neither the dystopian wasteland of Blade Runner nor a race of subservient robot slaves, in favour of an environment where humans coexist with A.I. in a collaborative effort towards common objectives: an ideal which will take some serious planning.
References
- Booch, G. (2017). Transcript of “Don’t fear superintelligent AI” – Ted Lecture. [online] Ted.com. Available at: https://www.ted.com/talks/grady_booch_don_t_fear_superintelligence/transcript?language=en [Accessed 1 Jun. 2017].
- Bostrom, N. (1997). Predictions from Philosophy. [online] Nickbostrom.com. Available at: http://www.nickbostrom.com/old/predict.html [Accessed 1 Jun. 2017].
- Bostrom, N. (2013). Superintelligence. 1st ed. Oxford: Oxford University Press.
- Dopfner, M. and Welt, D. (2016). Mark Zuckerberg talks about the future of Facebook, virtual reality and artificial intelligence. [online] Business Insider. Available at: http://www.businessinsider.com/mark-zuckerberg-interview-with-axel-springer-ceo-mathias-doepfner-2016-2?IR=T [Accessed 1 Jun. 2017].
- Etzioni, O. (2016). Most experts say AI isn’t as much of a threat as you might think. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/ [Accessed 1 Jun. 2017].
- Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers, 6, pp.31-88.
- Harris, S. (2016). Transcript of “Can we build AI without losing control over it?” – Ted Lecture. [online] Ted.com. Available at: https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it/transcript?language=en#t-652200 [Accessed 1 Jun. 2017].
- Kurzweil, R. (2006). The Singularity is near. 1st ed. London: Duckworth.
- Levy, S. (2015). How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over. [online] Backchannel. Available at: https://backchannel.com/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a [Accessed 1 Jun. 2017].
- Manners, D. (2015). Google makes big claims for D-Wave quantum computer. [online] Electronics Weekly. Available at: https://www.electronicsweekly.com/news/business/google-makes-big-claims-for-d-wave-quantum-computer-2015-12/ [Accessed 1 Jun. 2017].
- Metz, C. (2016). In a Huge Breakthrough, Google’s AI Beats a Top Player at the Game of Go. [online] Wired.com. Available at: https://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/ [Accessed 1 Jun. 2017].
- Sonnenburg, S., Braun, M., Ong, C., Bengio, S., Bottou, L., Holmes, G., LeCun, Y., Mueller, K., Perreira, F., Rasmussen, C., Ratsch, G., Scholkopf, B., Smola, A., Vincent, P., Weston, J. and Williamson, R. (2007). The Need for Open Source Software in Machine Learning. Journal of Machine Learning Research, [online] 8, pp.2443-2466. Available at: https://openresearch-repository.anu.edu.au/bitstream/10440/309/1/Sonnenburg_Need2007.pdf [Accessed 1 Jun. 2017].
- Temperton, J. (2017). Got a spare $15 million? Why not buy your very own D-Wave quantum computer. [online] WIRED UK. Available at: http://www.wired.co.uk/article/d-wave-2000q-quantum-computer [Accessed 1 Jun. 2017].
- Turing, A. (1952). Programmers’ Handbook for the Manchester Electronic Computer Mark II. 2nd ed. [ebook] Computer History – Archive. Available at: http://archive.computerhistory.org/resources/text/Knuth_Don_X4100/PDF_index/k-4-pdf/k-4-u2781-Manchester-Mark-I-second-ed.pdf [Accessed 1 Jun. 2017].
- Vinge, V. (1993). The Coming Technological Singularity: to Survive in the Post-Human Era. VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, [online] 1. Available at: https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html [Accessed 1 Jun. 2017].