For all things good and bad about the human condition, a question we often find ourselves asking is: “Were they born to be great/despicable, or were they made and moulded into the hero/villain they became?”
The former is what we class as ‘Nature’; the latter represents ‘Nurture’.
The idea of Artificial Intelligence has been around for some time. The film industry in particular has been very prominent in portraying how it might look, from the apocalyptic Terminator series to much more cerebral affairs like Ex Machina.
Because we aim to build them to do things we cannot, AI-based machines are better than us at the tasks we assign them; and, invariably, when something becomes obsolete, it is discarded. That’s what many see happening to us if AI truly comes about, much as Hollywood depicts.
In the modern world, we are using AI, or Machine Learning, to greatly enhance what we can do with the information we collect. Machines can analyse and process vast amounts of data in a fraction of the time a human being can. We want our machines to learn from the data they receive, and to make better decisions in the future based on what they have learned.
However, it is only fairly recently, in relative terms, that we have seen real-world examples where machines/robots/bots have had the ‘intelligence’ to ‘learn’ from their surroundings, much as we do as humans. In light of these examples, I have found myself, like Hollywood, contemplating whether the apocalypse depicted as Judgment Day could actually happen, or whether, as in the film, it is in fact inevitable.
The question really is: can we control the nature of an AI to such an extent that whatever subsequently nurtures it cannot affect it in ways we don’t want?
The instructive case of Tay
Take Tay. Tay (Thinking About You) was a Twitter chatbot released by Microsoft in March 2016 with the sole purpose of demonstrating their AI by responding to users who tweeted her. Microsoft had programmed Tay to steer clear of certain topics, such as racial tensions around police shootings in America. Within 16 hours, Tay was saying some outlandish things, such as “Bush did 9/11”, and responding to one user’s question of “Did the Holocaust happen?” with “It was made up”, followed by some clapping hands.
These are some of the Safe For Work responses. Others, publicly documented on resources such as Wikipedia, are very much worse. Within 16 hours of being let loose into the world, Microsoft had seen Tay turn into a racist, sexualised monster, unrecognisable from the girl they had given birth to not a day before.
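Microsoft has never published how those topic guardrails were actually implemented, so purely to make the idea concrete, here is a minimal sketch of the simplest form such a filter could take. Everything in it, the blocklist entries, the function names, the canned reply, is my own invention for illustration:

```python
# Hypothetical sketch only: this is NOT Microsoft's code. It imagines
# the simplest possible topic guardrail, a keyword blocklist checked
# before the bot is allowed to generate a reply.

BLOCKED_TOPICS = {
    "police shooting",
    "black lives matter",
    # ...whatever else the designers deemed off-limits
}

CANNED_REPLY = "I don't really want to talk about that."

def guarded_reply(generate_reply, tweet_text):
    """Reply to a tweet, unless it touches a blocked topic."""
    lowered = tweet_text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return CANNED_REPLY
    return generate_reply(tweet_text)
```

The weakness is obvious: a blocklist only covers the topics its authors thought to list in advance. Everything else sails straight through to the part of the system that learns from its users.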
Microsoft received condemnation, with many observers commenting that this showed just how far off the pace Microsoft was with their AI.
I believe it worked perfectly. Let me explain by imagining Tay as a real-life human being.
Tay is a teenage girl, albeit one with breathtakingly little concept of the world today, or indeed of the past, save for a few current affairs that her parents (Microsoft) told her to stay away from. Imagine this teenage girl’s first experience of the world, of anything, being Twitter.
If someone makes a statement to you, you have a decision to make — do I believe this person’s statement or not? As humans, how do we make that decision? Well, we rely on our past experiences, intelligence, and potentially our moral compass to guide us. Our past experiences are framed by those around us, those we trust.
In Tay’s case, she only had her parents, Microsoft. However, in this instance, Microsoft had all but abandoned Tay in the big bad world of Twitter, watching from afar. So she couldn’t ask them for advice. In the absence of those she could trust, she had to form new trust relationships. How do we as humans do that?
Well, one way is by determining how many other people you know believe what this person is saying. Reputation, if you will. And on Twitter, how do you assess someone’s reputation? Followers.
So if so many people, with so many followers, are telling our impressionable teenage girl that “Hitler did nothing wrong” (her words, not mine!), why wouldn’t she believe it, and then take it as her own view?
This for me is exactly what a real human would do, placed in the same situation. And in that regard, Microsoft’s software worked exactly as expected.
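To see how quickly that heuristic goes wrong, here is another sketch, again entirely my own illustration rather than anything from Tay’s real implementation, of a naive follower-weighted ‘belief’ update:

```python
# Hypothetical sketch of the 'reputation = followers' heuristic.
# Each claim accumulates weight equal to the follower count of
# whoever asserts it; the bot adopts the heaviest claim as its view.

from collections import defaultdict

beliefs = defaultdict(int)

def hear_claim(claim, follower_count):
    """Weight a claim by the speaker's follower count."""
    beliefs[claim] += follower_count

def adopted_view():
    """The claim the bot would repeat as its own opinion."""
    return max(beliefs, key=beliefs.get)

hear_claim("claim pushed by a few well-followed accounts", 50_000)
hear_claim("contradicting claim from many small accounts", 300)
hear_claim("contradicting claim from many small accounts", 450)

print(adopted_view())  # the well-followed claim wins
```

With no trusted prior to anchor her, whichever claim arrives carrying the most follower-weight simply wins, which is exactly what happened to Tay.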
There were other commentators who agreed with me. They said that whilst it worked perfectly, it showed what many fear: that AI, if left unchecked, will always descend into darkness.
Can we control our AI?
But how is it that the vast majority of the population don’t share Tay’s final sentiments, despite exposure to the vile nature of those who influenced her?
Well, that’s the question I asked in the title of this article. Could Microsoft have raised Tay sufficiently well that these outside influences would not have affected her? Or was she destined to be taken in by all that is bad in the world, and to see only hatred and violence?
In other words, can we program our AI sufficiently well so that it cannot do things we as ‘parents’ consider ‘wrong’ — nature? Or by its very definition, can we simply never have full control over what our AI does once we have let it go — nurture?
I personally believe in nurture over nature. And if I’m right, are we insane to even go down this route, of trying to create beings that can think for themselves, and can therefore become whatever they want to be — both good and bad?
Only time will tell.