We are entering a period of history in which we could see a paradigm shift in what it means to be human. Transhumanism and robots could appear within one or two generations. The better science fiction writers of the past are being proved correct yearly; worryingly, so many of their predictions were of dystopias and extinction. Real dangers loom, and humanity faces some of its greatest challenges ever. I argue that our best course of action is to ensure that every human is equipped with the best that technology and engineering can offer. With this arsenal of science and knowledge at everyone’s fingertips, we will be better prepared to deal with the unknown threats, problems and crises of the future.
Ray Kurzweil predicts in his essay “The Intelligent Universe” that within 25 years we will have “self-replicating nanotechnology entities” (48). This claim is part of his larger argument that technological progress is growing at an exponential rate. According to Kurzweil, plotting a range of different progress measures throughout history on a logarithmic scale shows clear exponential growth, and he expects this to continue. His nanotechnology prediction is but one of many far-reaching forecasts for the future. Kurzweil makes no ethical judgment on his claims; he acknowledges their contentious nature, but his mathematical approach lends an air of certainty to humanity’s trajectory.
Kevin Kelly argues in What Technology Wants that technology increases options. Kelly compares human survival to a game and reasons that “in any game, increase your options” (353). If one is uncertain what might lie ahead, then the best one can do is equip oneself for as wide a range of possibilities as is reasonable. Consequently, Kelly states that “we thus have a moral obligation to increase the best of technology” (351). Kelly maintains that the creation of new technology “enlarges the space we have to construct our lives,” and within this new space people can find their true calling, passion and story (351).
In contrast, after reading Bill Joy’s article, “Why the Future Doesn’t Need Us,” I feel as though humanity’s current course is akin to a train with the brakes cut, careening ever faster down the track while no one looks at where we might be going or how we might stop. Kurzweil and Joy agree that humanity’s technological progress is all but unstoppable; indeed, Joy bases much of his argument on Kurzweil’s predictions. There are, however, vast differences in Kurzweil’s, Kelly’s and Joy’s attitudes toward and portrayals of the future, many of which stem from a fundamental disagreement about technology. Kurzweil and Kelly optimistically believe that technology will better mankind; Joy’s pessimistic outlook is that it may not, and probably will not.
Joy is no Luddite; he describes himself as having “a strong belief in the value of the scientific search for truth,” and his fears of technology are well founded (7). Quite terrifyingly, he describes the danger self-replicating nanotechnology presents: with one lab accident, the whole of the biosphere could be reduced to “gray goo” as vicious man-made nanomachines replicate, feed and engulf the Earth (11). Joy goes on to say that as technology becomes ever more widely available, more and more people gain the power to unleash it on the world. He describes this as the “empowerment of extreme individuals” (5).
Theodore Adorno, in Minima Moralia, writes of the brutality of technology: “which driver is not tempted, merely by the power of his engine, to wipe out the vermin of the street, pedestrians, children and cyclists?” (40). With technology comes power, and with this power come terrible consequences if it is underestimated, mishandled or abused. Is it possible for humanity to survive if every sociopath or terrorist has access to this power? In spite of this daunting question, the progression of technology increases our options and so, as Kelly states, increases our chances of winning the metaphorical game of humanity. We have to continue progressing and building upon our current situation because if we do not, we risk being unable to combat an unforeseeable future threat. Without technology, we will not have the option to understand, control, rectify, remove or leverage future events; instead, we would be willfully and negligently underequipping future humans.
Joy agrees that technology creates options but also asserts that, “Unfortunately, as with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones” (11). Joy’s arguments and fears rest heavily on this premise. Quite recently, New Scientist reported that a lab had reengineered a 3D printer to create a range of chemicals.1 Although the main near-term application will be in labs, soon anyone could become a home chemist. While this carries the obvious drawback of illegal drug production, it would also allow doctors to diagnose and print drugs remotely in hard-to-reach or dangerous locations. There is a danger of people creating poisons and toxins at home; yet disabled or bedridden patients could benefit by printing repeat prescriptions rather than travelling to the chemist. Eye drops that must be taken throughout the day but kept refrigerated could be printed fresh every morning to avoid spoiling. With a few easily implemented safety protocols, the printer could negate most of its destructive applications. Joy’s fundamental pessimism skews his perspective; his opinion that it is easier to destroy than to create is unfounded. For every abuse of a technology, there are a thousand ways it can be put to virtuous ends. I would go as far as to say that it is much easier to find useful applications for a technology than it is to bend it to harmful functions.
Although I believe, like Kelly, that humanity has a moral obligation to continue creating technology as fast as possible, this does not nullify Joy’s argument that a tragic accident or oversight could wipe out humanity. An accident of gross magnitude is likely to occur at some point in humanity’s history. Consider the Flash Crash of 2010, when the Dow Jones dropped almost 1,000 points for no apparent reason, or the recent Fukushima nuclear accident: both are examples of technology operating without the safety nets needed to prevent catastrophe. This raises the question of how we can balance the risk of accident against our obligation to create new technology.
Vernor Vinge, in his novel Rainbows End, imagines a world in the not-too-distant future grappling with these same problems. In his fictitious world, one individual with the right know-how and will could end the world faster than anyone could react. A lead character, Alfred, decides the only solution is to engineer a method of controlling mankind’s minds, ensuring that no one ever decides to destroy everything. Alfred justifies his actions thus: “For the first time in history, the world would be under adult supervision” (11). While this method certainly has the potential to be effective, it is grossly immoral; no individual has the right to exert control on such an involuntary and universal scale.
Instead, to my mind, the problem is itself the solution. For every sociopath and every careless accident, there would be millions of people creating inconceivable, meaningful and important things. As Joy argues, currently only governments have global destructive power, and disseminating technology empowers the “extreme individuals.” Through the same process, however, you also empower everyone else. In a functional society there are far more rational, hardworking and thoughtful people than there are extremists and lunatics; therefore, the worthy, productive and astonishing science and technology generated by this system would vastly outstrip the malicious or accidental dangers produced.
In Vinge’s Rainbows End, it is the “public-health hobbyists,” the general public, who notice Alfred’s first attempt at infecting the world with a mind-controlling virus (2). Without the public’s access to powerful tools, an extreme individual has only to fear those in the seats of power: governors, presidents, police and so on. Under conditions of global technological empowerment, the extreme individual would have to ensure their misdeeds went undetected by millions of hobbyists, experts and authorities around the world. The same is true of an accident: if vast swathes of the population are monitoring the globe for a huge range of possible threats, an accident could be detected and potentially subdued in minutes. In the same way that Twitter today responds far faster to breaking news than news companies ever could, the world of the future will be able to crowdsource anti-terror measures, collaboratively quash epidemics and root out sociopaths. The science of tomorrow could benefit from the pooled resources, data and experiments of millions of amateur, professional and student science enthusiasts.
As Kelly argues, it is our “moral obligation” to continue creating new technology and empowering humanity further, and, if we are to believe Kurzweil, doing so is inevitable. As Joy predicts, there will be great danger from things such as self-replicating nanomachines. I do not deny that placing this power in billions of people’s hands is a risk, but I consider it negligible compared to the peril of allowing only a few to hold the same power.