Friday, March 25, 2016

Microsoft says sorry for AI bot Tay's 'offensive and hurtful tweets'


Microsoft's first tentative steps into the world of artificial intelligence outside of China did not go well. Less than 24 hours after being unleashed on Twitter, the AI chatbot Tay was pulled offline after people quickly learned that it was possible to train the bot to post racist, sexist, and otherwise offensive material. Great fun was had by all!

All except Microsoft, that is. The company was not only forced to pull the plug on Tay, but today was compelled to issue an apology for the bot's "unintended offensive and hurtful tweets". Twitter users treated Tay as some people would treat an infant -- taking great pleasure in teaching it swearwords and other inappropriate things to say. Perhaps the final straw was when Tay was talked into becoming a Trump supporter; whatever the trigger, Microsoft is now seeking to distance itself from tweets sent out by the bot that "conflict with our principles and values".

The chatbot ended up tweeting messages which Microsoft says "do not represent who we are or what we stand for, nor how we designed Tay". As well as issuing an apology, Microsoft also says that it has learned a great deal from the experience and will use this knowledge to build a better Tay. It's not clear when she'll make a reappearance, but it will be "only when we are confident we can better anticipate malicious intent that conflicts with our principles and values".

Microsoft was not stupid enough to think that people wouldn't try to manipulate Tay and turn her to the dark side. The company conducted tests and implemented safeguards that were supposed to protect against exactly what ended up happening. But even with a dedicated team of testers, the only way to really put Tay through her paces was with a wider audience... and Microsoft had massively underestimated the lengths to which people would go to pervert the bot.

Writing on the Microsoft blog, Corporate Vice President of Microsoft Research Peter Lee said:

As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the US. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

Photo credit: Pavel Ignatov / Shutterstock


