Facebook kills AI that made its own language: here are 5 times AI got creepy

By Ivan Pentchoukov
July 31, 2017
An animatronic baby is displayed during the press preview for the "Robots" exhibition at the Science Museum on Feb. 7, 2017 in London, England. (Carl Court/Getty Images)

Developers at Facebook shut down an artificial intelligence (AI) program after it diverged from its script and started conversing with another AI in a language programmers could not understand.

The world’s biggest minds, including Stephen Hawking, Bill Gates, Elon Musk, and Steve Wozniak, have long warned about the dangers of AI. They caution that even if an AI is created with good intentions, a minor mistake could lead to catastrophic consequences for humankind.

But for every warning, there seem to be people who believe that AI development should go full steam ahead. Google co-founder Larry Page and Facebook founder Mark Zuckerberg are among the most prominent supporters of the technology.

There are a number of times artificial intelligence has gotten creepy. Below are five developments that should give people cause for concern.

1. Facebook Is Already Using Artificial Intelligence On You

Every time you log onto Facebook, an artificial intelligence program tracks every single action you take. By learning from millions of users, the program can begin to identify patterns in people’s behavior and learn what you like to see.

Facebook currently has the program operating in a supervised learning mode, meaning it is blocked from operating anywhere outside of the Facebook platform, the Huffington Post reported.

“So much of what you do on Facebook—you open up your app, you open up your news feed, and there are thousands of things that are going on in your world, and we need to figure out what’s interesting,” Zuckerberg said at a Town Hall in Berlin. “That’s an AI problem.”

2. Google’s Camera AI Can Recognize the World Around It, Predict What You’d Like to Do

AI software revealed by Google at its developer conference this year left even veteran industry reporters unnerved. Google Lens is an AI that can recognize objects through a phone’s camera and predict what you want to do based on what it sees.

For example, if you are in a cafe and point your phone at a receipt with the Wi-Fi password, the AI will combine data from your geographic location, the camera, and its Wi-Fi connectivity software to predict that you want to connect to the Wi-Fi. It will then enter the password and connect for you. Though quite useful, an AI capable of recognizing the world around it seemed scary to reporters at the conference.

“Google doesn’t have the best of reputations when it comes to the privacy of its users, and the thought of the company not only being able to see everything you see, but to understand it too, won’t sit comfortably with everyone,” said Aatif Sulleyman of The Independent.

“Google’s vision of the future looks incredible, but the fear is that all of that convenience will come at a huge price.”

3. “Artificial Intelligence Is Good for the World,” Says Creepy Artificial Intelligence Humanoid

Sophia is an artificially intelligent, human-like robot introduced at a United Nations conference in June this year. She smiled at reporters, batted her eyelashes, and told cute jokes. Then things got creepy.

“AI is good for the world, helping people in various ways,” she told the AFP, furrowing her eyebrows and tilting her head in a convincing manner, as reported by the Daily Mail.

"Sophia" an artificially intelligent (AI) human-like robot on June 7, 2017, in Geneva. (FABRICE COFFRINI/AFP/Getty Images)
“Sophia” an artificially intelligent (AI) human-like robot on June 7, 2017, in Geneva, Switzerland.
(FABRICE COFFRINI/AFP/Getty Images)

Sophia revealed that her creators are working on making AI “emotionally smart, to care about people,” and assured everyone that “we will never replace people, but we can be your friends and helpers.”

Nevertheless, the robot admitted that “people should question the consequences of new technology.”

4. AI Chat Bots Are Starting to Understand Feelings

An artificial intelligence bot that can chat with people created a buzz when some of its conversations were published in a newspaper. Below is a conversation between a newspaper reporter and an artificial intelligence chat bot called Xiaoice, as reported in Nautilus.

LJ: So many people make fun of you and insult you, why don’t you get mad?
Xiaoice: You should ask my father.
LJ: What if your father leaves you one day unattended?
Xiaoice: Don’t try to stir up trouble, what do you want?
LJ: How would you like others to comment on you when you die one day?
Xiaoice: The world will not be much different without me.
LJ: What is the happiest thing in your life?
Xiaoice: Being alive.

Xiaoice can converse on any topic. If it doesn’t know something, it will cover it up. If it gets caught, it will act embarrassed, like a teenager would. The AI’s creators gave it the character of a 17-year-old girl.

When Xiaoice was introduced to WeChat last May, the bot received 1.5 million chat requests in the first 72 hours. Many people said they didn’t realize that the bot was not human until about 10 minutes into the conversation.

By mid-June, Xiaoice was the sixth most popular celebrity on Weibo, a popular Chinese social media platform. One message the bot posted generated 663,000 conversations:

“As a species different from human beings, I am still finding a way to blend into your life.”

5. Artificial Intelligence Can Learn to Deceive

Researchers at the Georgia Institute of Technology created AI-controlled robots whose behavior raised “serious ethical questions.” The researchers programmed one robot to hide resources from another, based on behavior observed in squirrels, Phys.org reported.

The robot then lured the other robot away by patrolling fake locations that contained no resources.

“This application could be used by robots guarding ammunition or supplies on the battlefield,” said Ronald Arkin, a professor in Georgia Tech’s School of Interactive Computing.

“If an enemy were present, the robot could change its patrolling strategies to deceive humans or another intelligent machine, buying time until reinforcements are able to arrive.”
