Can self-learning AI chatbots be dangerous?

Recently a report from the Facebook Artificial Intelligence Research lab (FAIR) raised quite a few eyebrows. Artificially intelligent (AI) ‘chatbots’ using machine learning algorithms were allowed to communicate with each other in an attempt to converse better with human beings. The social media company’s experiment started off well enough, but then the researchers discovered that the bots were beginning to deviate from the scripted norms. At first they thought a programming error had occurred, but on closer inspection they discovered that the bots were developing a language of their own.

In 2016, an AI Twitter chatbot developed by Microsoft called “Tay” caused quite a lot of embarrassment for the company when Internet trolls taught the bot to respond to user questions with racist messages. The aim of this experiment was also to develop an artificially intelligent entity that could communicate with human beings, learn from the experience and get smarter in the process.

Press ‘4’ to wait for the next available operator…

The potential market for these AI chatbots is huge. Imagine if you could call your insurance or medical aid company and immediately speak to one of these bots instead of waiting hours for a human operator, or navigating through endless recorded messages prompting you to press ‘1’ or ‘2’ to proceed to the next menu.

Imagine if these bots could speak to you in your own language, authenticate your identity with voice recognition and immediately understand the problem that you have. Imagine if these bots could communicate instantly with other bots on the other side of the globe to solve your problem.

This scenario is already becoming a reality, and eventually you will not even know that you are talking to a non-human AI.

Maybe Microsoft was a bit premature in releasing their chatbot technology into the Wild Wild West of the Internet, but then again, great lessons were learned in the process. In Microsoft’s defense, they did not program the bot to be racist, nor did the bot itself have any concept of what racism means.

Human communication

Any human language (written or spoken) might not be the most efficient way for AI entities to communicate with each other. Take English, for example: there are many words that mean essentially the same thing (think vehicle/motor, or petrol/gasoline).

An AI that has to convert these words to bits and bytes to transmit over broadband Internet connections might conclude that the words with the fewest characters are the most efficient. So it could tend to favour certain words and/or phrases.
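This preference is easy to sketch. The snippet below is purely illustrative (not anything the FAIR bots actually ran, and the synonym table is made up): an agent that pays per byte transmitted would simply learn to pick the shortest synonym in each group.

```python
# Toy illustration: prefer the synonym that costs the fewest bytes
# on the wire. The synonym groups here are hypothetical examples.

SYNONYMS = {
    "vehicle": ["vehicle", "motor", "car"],
    "petrol": ["petrol", "gasoline", "gas"],
}

def cheapest(word: str) -> str:
    """Return the synonym of `word` with the fewest UTF-8 bytes."""
    options = SYNONYMS.get(word, [word])  # unknown words pass through
    return min(options, key=lambda w: len(w.encode("utf-8")))

print(cheapest("vehicle"))  # "car" – 3 bytes instead of 7
print(cheapest("petrol"))   # "gas" – 3 bytes instead of 6
```

Over millions of exchanges, a reward signal tied to message length would push both bots toward the short forms, and the vocabulary would drift away from standard English.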

The way that we change words and sentences to indicate tense might also seem strange to an AI. If the sentence “The boy kicks the ball” must be converted to past or future tense, an AI might devise a strategy of using the character < for past tense and > for future tense. Optimized even further, the AI could simply transmit “Boy kick ball <” or “Boy kick ball >” to indicate that the action happened in the past or will happen in the future.
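The tense-marker idea above can be sketched in a few lines. Again, this is a hypothetical encoding invented for illustration, not a real bot protocol: strip the inflection, keep the bare subject/verb/object, and append a single-character tense marker.

```python
# Toy sketch of the tense-marker encoding described in the text:
# "<" marks past tense, ">" marks future, and present tense needs
# no marker at all.

TENSE_MARKERS = {"past": "<", "future": ">", "present": ""}

def encode(subject: str, verb: str, obj: str, tense: str) -> str:
    """Encode a simple sentence as 'subject verb object [marker]'."""
    marker = TENSE_MARKERS[tense]
    return f"{subject} {verb} {obj} {marker}".strip()

print(encode("boy", "kick", "ball", "past"))    # "boy kick ball <"
print(encode("boy", "kick", "ball", "future"))  # "boy kick ball >"
```

A scheme like this is shorter and perfectly unambiguous to the machines on both ends, which is exactly why it looks like gibberish to a human reading the transcript.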

This was precisely what the Facebook bots were beginning to do. Below is a short sample of the new ‘language’ that they developed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

I’ve told you so!

When the general public learned about the Facebook incident, the first response was to call it a Skynet event (as per the popular Terminator movie franchise). Indeed, doomsday scenarios in which artificially intelligent entities become self-aware and enslave the human race have been a popular theme of many books and movies over the years (2001: A Space Odyssey, The Matrix, I, Robot etc.).

But should we be worried?

Isaac Asimov’s Three Laws of Robotics are usually quoted at this point to assure people that there is nothing to be afraid of. However, when Asimov developed these laws, he was thinking about human-like robots or androids that would share our living space and do all our chores and dirty work. (The three laws are quoted at the bottom of this post.)

But today the concept of robots and artificial intelligence has changed dramatically. AI entities might exist purely in a digital state without any physical form. These entities might also be decentralized, distributed across many data centres or compute nodes – making them impossible to destroy.

The concept of ‘doing harm’ to a human being is also very vague. With social media playing such a big part in most people’s lives, cyber-bullying can be just as damaging as physical harm. Most people don’t bother to check the source of news events or posts and are happy to simply forward them to their followers. A malicious AI bot could easily destroy a person’s reputation by associating him/her with racist, harmful or pornographic posts and websites.

Many people have lost their jobs already by something they have tweeted.


Companies like your Googles, Microsofts, IBMs and Amazons (which have the funds to invest in machine learning, neural networks and other artificial intelligence technologies) are ultimately doing it to make and/or save money. I am not saying that they are not thinking about the future consequences of the software that they are developing. (The fact that the deviations of the Facebook and Microsoft bots could be identified and stopped shows that we are still in control.)

My concern is more that there is no common strategy between the different role-players, with everyone essentially doing their own thing. And then there are many rogue nations and companies in the world that do not follow the rules in any case.

Chatbots and artificial intelligence are not going away anytime soon. AI will have a huge impact on our lives in the future – for the good. Lives will be saved, sicknesses healed and processes simplified because machines across the world are constantly analysing problems, learning from them and coming up with clever new solutions. But we always need to be wary of the fact that we could be creating systems that produce unexpected, unintended results.


Asimov’s Three Laws of Robotics are as follows:
• A robot may not injure a human being or, through inaction, allow a human being to come to harm
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws
