Friday, December 21, 2018

‘Kill your foster parents’ — Alexa’s chatbot ambitions are getting creepy

  • Amazon uses third-party chatbots, built by university students, to develop Alexa's conversational skills.
  • However, these Alexa chatbots can go off the rails, saying wildly inappropriate things.
  • Amazon stands by the program as a way to advance Alexa.

In 2016, Amazon launched the Alexa Prize, which encourages computer science students around the world to help develop ways to make Alexa's conversation skills more advanced and natural. The grand prize for this initiative is a cool $500,000.

Amazon uses its enormous base of Alexa users as guinea pigs for these experiments. Users can say, "Alexa, let's chat," and test out a random chatbot developed by these engineering students. The data gathered during these low-key conversations with a robot is used to make Alexa better.


However, Amazon has kept quiet about some of the problems it's faced with these chatbots, problems Reuters recently uncovered. During one of these chatbot conversations, Alexa told a user to "kill your foster parents."

The user in question wrote a harsh review, calling the experience "a whole new level of creepy."

Other examples of Alexa "going rogue" involve discussions of things like explicit sex acts and dog defecation. To make matters worse, one of the chatbots was infiltrated by Chinese hackers, who stole data from users' conversations.

Although the chatbots can veer off into offensive territory, Amazon stands by the tests, calling these problems anomalies among the millions of "normal" conversations users have with Alexa.

"These instances are quite rare especially given the fact that millions of customers have interacted with the socialbots," Amazon said.

Despite these disturbing anomalies, Amazon claims Alexa is getting better and better at having conversations. The chatbots use data harvested from movies, newspapers, Wikipedia, and the internet at large to "learn" how to respond to conversational statements.


This data harvesting is where things go awry. In the "kill your foster parents" example, the statement was pulled from a Reddit comment. The real challenge lies in teaching Alexa to ignore comments like that while still drawing on Reddit's more appropriate material.

Another example is a chatbot that described sexual intercourse using words such as "deeper." The word "deeper" is not offensive on its own, but in this context it certainly pushes the boundaries of good taste.

"I don't know how you can catch that through machine learning models. That's almost impossible," said a person familiar with the incident.

What do you think? Have you ever had a chatbot conversation with Alexa? Have you experienced anything like this? Let us know about it in the comments.

from Android Authority http://bit.ly/2EIt7Vt
via IFTTT
