The Value of Future Thinking

“Every action by any executive which does not follow the exact directions of the Machine he is working with becomes part of the data for the next problem. The Machine, therefore, knows that the executive has a certain tendency to disobey. It can incorporate that tendency into that data,—even quantitatively, that is, judging exactly how much and in what direction disobedience would occur. Its next answers would be just sufficiently biased so that after the executive concerned disobeyed, he would have automatically corrected those answers to optimal directions. The Machine knows, Stephen!”

– Isaac Asimov, I, Robot (1950)

For any reader of Asimov’s I, Robot, perhaps the most chilling moment comes in the final story, when Dr. Calvin proclaims humanity’s loss of autonomy to the machines. I am certain that statement filled you with dread: a state in which human free will is reduced to being manipulated by machines. Yet there is a positive way of seeing it as well. The very same things that make humanity great are often its greatest downfalls, a notion that becomes evident through any work of science fiction. Perhaps that is why the genre has earned an unfair reputation as a byword for the impossible. It encapsulates the realms of impossibility: spacecraft and laser swords, vast intergalactic wars, empires of a billion individuals. It is a dystopian world of often utopian ideals, presenting the fear of technology overwhelming individual identity, reducing each of us to one amongst countless others. It is a contradiction striking at the root of human insecurity and desire.

How so? Science fiction presents us with our wildest dreams (I can’t imagine a single person who wouldn’t want a lightsaber) set in an often troubled or violent world. These are not functioning societies but shallow utopian states masking a deeper, inescapable dystopia, in the vein of the proles in 1984 or the castes in Brave New World. Far from mere fiction, science fiction offers us unique insight into the field of futurism: the contemplation of an achievable future we might think we want but would really suffer within.

I present you with three writers who do just this: Asimov, Adams and Banks. Their writing has inspired and pervaded current research to the degree that ideas presented in their stories have become ubiquitous in scientific thought; think of Asimov’s three laws or Banks’ neural lace. This is not out of homage to the writing but out of practicality, given how relevant such ideas are to current developments in technology. Far from fictional impossibility, the dreams of science fiction are quickly becoming reality. Given the potentially drastic social effects such disruptive technologies could have, science fiction offers a useful starting point for thinking about their application. That writing has arguably given rise to the field of futurism, the topic of this article.

Futurism here is not the artistic movement of the early 20th century; it is the study of the future. Stated in such a way it seems ridiculous: there is little purpose in considering the future, given its indefinite nature. The most certain thing about the future is precisely its uncertainty. Despite that, futurism is an essential field. Technologies are described as disruptive for a reason: they have wide socio-economic and cultural impacts beyond what their contemporaries might have imagined. The prime example is the internet; how many were sceptical at its release? It has since created insurmountable social divides. The term ‘digital native’ was coined to describe one side of that divide: those at ease with technology, as opposed to those who are not. However much opportunity technology offers, there is an undeniable inequality created by it. It is essential to promote futurism as a worthy area of study now rather than later, when the effects a futurist might have anticipated are already being felt.

What do I mean? One industry poised for disruption is long-haul trucking. Companies like Uber and Mercedes are developing advanced autonomous cargo trucks, perfectly capable of driving on highways (the roads on which autonomous vehicles function best) with the same if not higher efficiency than truckers currently demonstrate. Anyone who saw Logan might recall such vehicles on the highways within the film. Far from fantasy, they might appear on the market as early as 2025, according to Mercedes’ estimates. This would remove the average $46,000 annual salary of 3.5 million truckers in the United States alone, roughly $160 billion in wages each year. The necessity would be to redirect the efficiency gains earned by such a replacement into re-employing or retraining those displaced. Without proper foresight, such measures become too little, too late, as is too often the case in politics.

That is itself a problem: politics, or the lack thereof, when it comes to disruptive technologies. The failure to educate the public on such technologies has meant that they are not a wider concern when they should be; as such, politicians either do not know or ignore the power these industries could have over the future. Think of Google and Facebook’s fake news scandals, Amazon’s potential to become the biggest monopoly we’ve ever seen, or most of the complaints associated with Uber. These companies operate largely independently. I think that’s good, as it allows them to improve efficiency in what are beneficial services, but there comes a point at which they can, inadvertently or not, take advantage of markets unaware of their power. What occurs next? The inevitable outcry. The labelling of them as destructive powers. Demands to shut them down or restrict their power would come at vast social cost, given that their services are overwhelmingly beneficial. It happened with IVF: infertile couples were prevented from having children for a time because others in society proclaimed IVF an outrage. We fear what we do not understand, and technology promises to increasingly evade understanding. The answer is not to restrain it, but rather to encourage a wider appreciation of it, so that it might be effectively integrated.

Recent developments are not promising. President Trump has yet to appoint a science advisor. Across the world, the news is more concerned with the sway of nationalism than it is with developing technology. Experts and older generations appear determined to label millennials as entitled ADD cases dependent on technology. Such views are understandable, yet overwhelmingly detrimental. Developed nations accuse immigrants of taking their jobs when the reality is that machinery and autonomous systems have done so. Workers are competing against highly efficient software without even knowing it. What is curious about our current technological revolution is precisely the lack of acknowledgement that it is going on. The Luddites had similar issues, but at least they knew what they were protesting. I find it curious that, as a millennial, I am more aware of the effects of technology, above all smartphones, in changing society than many of my elders who have actually seen the change. To me, a smartphone offers greater access to the world; my parents instead suggest I am cutting myself off from it. The divide is obvious, the solution less so.

Elon Musk is chief amongst those attempting to offer a solution. His latest project draws directly from science fiction in his desire to create a “neural lace”, a term first introduced by Iain M. Banks for a form of uplink between the organic human brain and an intelligent computer. The purpose is not to elevate some humans above others through enhanced interaction with intelligent machines, but instead to ensure greater harmony between artificial and human intelligence, lest one outgrow the other. Given humanity’s track record, any other form of intelligence seems bound to win. It is exactly such forward thinking that should be encouraged. Musk is considering a future where machine intelligence lives amongst us; as such, he is taking steps to ensure it can be efficiently incorporated without excluding the majority of humanity. Similar approaches might minimise the short-term damage caused by other potentially disruptive technologies, which promise to appear more regularly and with greater effect.

These are not technologies with definite returns. The idea of implanting electronics into individual brains is not only practically implausible but also provokes cultural disgust. It is potentially a ridiculous notion doomed to fail. Or is it? Filmmaker Rob Spence replaced his right eye with a miniature, self-contained camera. Epicenter, a Swedish company, inserted microchips into employees as a substitute for passcodes and keycards. Some artists are experimenting with implants as a form of integration and inspiration. The notion is far from mainstream, but it is out there. Amber Case has gone as far as to suggest that our use of phones as a sort of external brain demonstrates a movement towards the cyborg. The developments being made in prosthetics could also lead to a point where individuals choose to have bionic limbs: instead of human legs prone to exhaustion, why not make use of a range of specialised prosthetics? We are far from this point, but we are also close enough that it is not implausible. Far from a dream, Musk’s neural lace appears to address a problem we are bound to have, by encouraging an organic/artificial interface. It is the proactive approach we desperately require as Silicon Valley giants pour increasing funds into ‘moonshot’ projects with extensive disruptive potential. Spending on AI alone has risen exponentially in the past few years, from mere millions in the early 2000s to $58bn in 2014. Now is the time to look to the future and consider how to deal with it, before it overwhelms us.

One of the leading proponents of futurism has become the author Yuval Noah Harari, if not directly, then through his latest work ‘Homo Deus: A Brief History of Tomorrow’. Through it, he considers the likely focus points of human development over the 21st century. The point he makes about the uncertainty of the study is well balanced by his assertion of futurism as a theory-based approach rather than one aimed at constructing certainties. Yet it is not necessary to discard certainties altogether. One of the developing markets since the rise of Silicon Valley has been the data industry, both in storage and management. There are increasing amounts of data available in the world, enabling increasingly accurate predictions about future developments based on trend and investment models. Futurism as a field could gain a lot more traction through the use of data mining to create, if not precise, then at least broad overviews of the impacts of disruptive technologies (a toy sketch of such an extrapolation follows below). With an engaged public and effective government management of such technologies, their benefits could be harnessed as tools of social welfare. As it stands, technology is a divider rather than the equalizer it should be. Most estimates suggest developments benefit their creators to a much greater extent than their consumers. If Google succeeds in creating a general AI through DeepMind, it could get rid of most of its employees, reaping the benefits of its business without returning anything to society through wages. The risk is real, the reaction non-existent.
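To make the idea of trend models concrete, here is a minimal sketch of the kind of crude extrapolation involved. The $58bn figure for 2014 is the one cited above; the 2002 baseline of $10m is a hypothetical stand-in for the ‘mere millions’ mentioned earlier, so treat the output as illustrative rather than a forecast.

```python
# A toy trend extrapolation from two data points.
# The 2014 figure ($58bn) is cited in the text; the 2002 baseline of $10m
# is an assumed stand-in for "mere millions", chosen purely for illustration.
baseline_year, baseline_spend = 2002, 10e6
recent_year, recent_spend = 2014, 58e9

# Implied compound annual growth rate r, solving:
#   recent_spend = baseline_spend * (1 + r) ** (recent_year - baseline_year)
years = recent_year - baseline_year
r = (recent_spend / baseline_spend) ** (1 / years) - 1
print(f"Implied annual growth: {r:.0%}")  # roughly 106% per year

# Naive extrapolation five years forward: a broad overview, not a prediction.
for t in range(recent_year + 1, recent_year + 6):
    projected = recent_spend * (1 + r) ** (t - recent_year)
    print(f"{t}: ${projected / 1e9:,.0f}bn")
```

A real futurist model would of course draw on far richer datasets, but even a two-point fit makes the pace of investment tangible.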

I suppose the final message is not to fear, or worse reject, what we cannot understand. Instead, embrace it; with time we will undoubtedly come to love it, as has happened so often with past technological breakthroughs. This is the time to be open, not closed. The future promises to go beyond science fiction, but only if we allow it. Without an effective understanding of the possibilities offered by disruptive technologies, society will undoubtedly be overcome, leading either to widespread inequality or to harmful resistance to the implementation of new technologies. I doubt our ability to adapt to and engage with new technologies, so I fear the worst. I hope for the best.
