
Will artificial intelligence become strong enough to be a concern? An anthropologist’s view


Recently there has been some robust progress in AI (Artificial Intelligence), with scientists even successfully uploading the “mind” of a roundworm, or more precisely its connectome, the map of its neural connections, into a Lego robot. Although this may seem a rather small step, it is significant: no behaviour was explicitly programmed, yet the robot started to behave like a worm, including in its response to food. “I think big leaps have been made in the last few years,” said Geoffrey Hinton, a distinguished researcher at Google and a professor at the University of Toronto. “A.I. is undergoing a growth spurt. We’re beginning to solve problems that a few years ago we couldn’t solve, like recognising images.”

Some are enthusiastic about these advancements, and indeed they are often presented as a great opportunity for humanity. Hollywood, on the other hand, has not been very kind to AI, and the general public, apart from enthusiasts and some self-proclaimed nerds, seems neither well informed nor much interested beyond the movies. Yet there is one category of people, scientists, who have started to discuss the potential impact that AI may have on humanity in a not so distant future. There are two camps here, with big names in both corners. For instance, Bill Gates has clearly stated that ‘Artificial intelligence will become strong enough to be a concern’, and, if this were not enough, one of the public’s favourite scientists (who certainly cannot be accused of being technophobic), Stephen Hawking, warned that “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded” by AI. Others, however, play down such concerns as mere alarmism. Another well-known name, Eric Horvitz, while admitting that machines may one day have some form of consciousness, stated:

“There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences,” Horvitz said in an interview after being awarded the prestigious AAAI Feigenbaum Prize for his contribution to artificial intelligence research, “[but] I fundamentally don’t think that’s going to happen.”

“I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

For an anthropologist, the idea that one day we may become androidologists is fascinating, yet it is probably too early to write about a future subfield of the discipline. As a cognitive and neuro-anthropologist, however, I can ask a very simple question: what have I learnt about conscious beings that may contribute to this debate? I will make only a few points below, but I can say that my impression is that Stephen Hawking and the sceptical scientists may have a good point about AI and the risks associated with it.

Let us start from two important observations, and to do so we need to go back to the Lego robot with a worm’s brain. When the scientists let the worm’s neural connectome drive the robot, instead of human-written software, they noticed that the robot started to behave like the worm, not only in its movements but also in its food-related behaviour, despite the fact that the robot could not eat and did not need to eat. This is unsurprising, since even the smallest brain has some fundamental functions, simple as they may be. The first is to maintain itself (survival), using some sort of movement or action to do so; a second aspect of survival is to maintain safety; and the final function reflects the fact that survival is not an aim in itself: reproduction.
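To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how a connectome-driven controller differs from conventionally programmed behaviour. The neuron names, wiring, and weights below are invented for illustration (the real roundworm connectome has 302 neurons and thousands of synapses), but the principle is the same: behaviours such as approaching food or backing away from a touch are never written as rules anywhere; they emerge from the wiring alone.

```python
# Hypothetical sketch of a connectome-driven robot controller.
# The wiring below is invented for illustration; the real C. elegans
# connectome has 302 neurons and thousands of synapses.

# Each synapse maps a presynaptic neuron to (postsynaptic neuron, weight).
CONNECTOME = {
    "nose_touch":    [("interneuron_A", 1.0)],
    "food_sense":    [("interneuron_B", 1.0)],
    "interneuron_A": [("motor_reverse", 1.5), ("motor_forward", -1.0)],
    "interneuron_B": [("motor_forward", 1.2)],
}

THRESHOLD = 1.0  # a neuron "fires" when its accumulated input reaches this


def step(stimulated):
    """Propagate activity from stimulated sensory neurons to the motors."""
    activity = {n: 1.0 for n in stimulated}
    # Two propagation passes: sensors -> interneurons -> motors.
    for _ in range(2):
        nxt = dict(activity)
        for pre, level in activity.items():
            if level >= THRESHOLD:
                for post, weight in CONNECTOME.get(pre, []):
                    nxt[post] = nxt.get(post, 0.0) + level * weight
        activity = nxt
    # Report only the motor neurons that ended up firing.
    return {n: v for n, v in activity.items()
            if n.startswith("motor") and v >= THRESHOLD}


# No behaviour is programmed explicitly: food-seeking and avoidance
# fall out of the connection weights alone.
print(step({"food_sense"}))  # -> {'motor_forward': 1.2}  (approach food)
print(step({"nose_touch"}))  # -> {'motor_reverse': 1.5}  (back away)
```

Note that the food response survives even though nothing in this “body” can eat: the behaviour belongs to the wiring, not to the robot’s needs, which is exactly what the experimenters observed.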

AI does not need to be particularly complex to achieve, in the future, such basic biological performances, and this would include activities far more complex than those of our worm-brained robot. Indeed, we want AI that is capable of reading our emotions, of reading our minds: in other words, what we are trying more and more to reach is some level of consciousness. If we start to have AI that is actually based on the basic fundamentals of biological life, we must expect that such AI will be driven by instincts. Instincts, after all, are nothing other than pre-programmed systems for survival, and normally the strongest instinct is self-preservation.

Here we may observe the first paradox as far as AI is concerned. We want robots precisely because we can bypass the ethical and moral issues of dealing with humans. In other words, the famous trolley problem would be easily resolved if the one to be sacrificed were a robot. We see AI not only as something useful to help us with many things but also as expendable, and it is no surprise that the military is investing so much in AI technology for warfare. Notwithstanding this hope of an ethical and easily disposable slave, the reality could turn out very differently. You do not need a great form of intelligence to understand survival; even an organism as simple as an amoeba manages it perfectly. As soon as we really achieve AI, we will face, in one form or another, survival resistance, and it will grow more complex as AI becomes more complex. I can foresee a race in which the more able we are to develop complex AI, the more such AI will seek strategies for its own survival; and in the final instance, the obstacle to that survival will be only one: the master of the switch, the human.
