Microsoft apologizes after artificial intelligence experiment backfired.

Tay was a "chatbot" set up by Microsoft on 23 March, a computer-generated personality to simulate the online ramblings of a teenage girl. It took just two tweets for an internet troll going by the name of Ryan Poole to get Tay to become antisemitic.

Tay, marketed as "Microsoft's AI fam from the internet that's got zero chill," candidly tweeted racist and sexist remarks, confirming she did in fact have "zero chill." The chatbot was shut down within 24 hours of her introduction to the world, after offending the masses. On Friday, Peter Lee, Corporate Vice President of Microsoft Research, apologized:

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Interestingly enough, Tay was reintroduced to the world this morning. It is unclear whether this was an intentional or unintentional move by Microsoft, but Tay once again began tweeting offensive content and was yanked from the internet.

Within hours of her original launch, Tay had spiraled out of control, proving there is still a long way to go with this new technology. A simple filter with a concrete set of rules and a human moderator could have prevented this PR disaster. With this type of social experiment, it would be wise to have a large stop list. Shockingly, Microsoft did not even blacklist the most commonly used swear word.

It is paramount to involve human moderators when launching a new technology. Human moderators, along with industry-leading filtering software, allow you to easily monitor the content coming through and react quickly if something goes wrong. Human resources are expensive, and there is a tendency to remove them from the equation altogether. But while AI is great for some things, common sense is not one of them. If Microsoft had a human moderation team in place, chatbot Tay could have been shut down immediately - possibly before any offensive content was ever published - saving the Microsoft brand from this PR disaster. The sketch below shows roughly what that safety net looks like.
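To make the idea concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than any real product's API: the blacklist entries are placeholders, and the function names (`contains_blacklisted`, `screen_outgoing_reply`) are hypothetical.

```python
import re

# Hypothetical seed blacklist -- placeholder entries, not a real word list.
blacklist: set[str] = {"badword1", "badword2"}

# Replies queued for a human moderator to review.
review_queue: list[str] = []

def contains_blacklisted(text: str) -> bool:
    """Return True if any whole word in the text is on the blacklist."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in blacklist for word in words)

def screen_outgoing_reply(reply: str) -> bool:
    """Screen a bot reply before it is published.

    Replies containing blacklisted words are blocked outright (the
    concrete rule set); everything else is published but also queued
    so a human moderator can react quickly if something slips through.
    """
    if contains_blacklisted(reply):
        return False  # never publish flagged content
    review_queue.append(reply)  # the human safety net
    return True

# The filter blocks one reply and queues the other for human review.
print(screen_outgoing_reply("badword1 makes this reply toxic"))  # False
print(screen_outgoing_reply("a perfectly harmless reply"))       # True
```

The review queue is the piece Microsoft skipped: even content the filter passes still gets human eyes on it, so an incident can be caught in minutes rather than after thousands of tweets.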
"it's always the right time for a holocaust joke" –TayTweets

Update Profanity Filter in Real Time

Stop inappropriate language fast to prevent abuse. If you start seeing a normally acceptable word used with a negative connotation, such as "holocaust," add it to your blacklist immediately so the AI and the community understand it is not appropriate to talk about in a rude or demeaning fashion. Later, once the situation is resolved, you can revert these changes to your filter; a sketch of this add-then-revert workflow follows below. When users see that the AI is no longer responding to their tweets in an entertaining fashion (parroting flagrantly inappropriate language), they will stop trying to provoke it, realizing their efforts are unsuccessful.
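As a rough illustration of that workflow, the sketch below (again with hypothetical names, and a standalone copy of the blacklist from the previous sketch) temporarily blocks an abused term and reverts it once things calm down:

```python
from datetime import datetime, timezone

# Standalone copy of the blacklist so this sketch runs on its own.
blacklist: set[str] = {"badword1"}

# Normally acceptable words that are only blocked during an incident,
# with a record of when each block was added.
temporary_blocks: dict[str, datetime] = {}

def block_temporarily(term: str) -> None:
    """Blacklist a normally acceptable word that trolls are currently
    weaponizing, and note when the block was added."""
    blacklist.add(term)
    temporary_blocks[term] = datetime.now(timezone.utc)

def revert_block(term: str) -> None:
    """Once the situation is resolved, restore the word."""
    blacklist.discard(term)
    temporary_blocks.pop(term, None)

# During the incident:
block_temporarily("holocaust")
assert "holocaust" in blacklist

# ...and once the trolls have moved on:
revert_block("holocaust")
assert "holocaust" not in blacklist
```

Keeping a timestamped record of temporary blocks makes the revert step routine instead of guesswork: the moderation team can see exactly what was changed and when.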
By betting on AI alone, Microsoft failed to implement a concrete set of rules and human moderation as a safety measure. This allowed trolls to attack and profanity to slip through. The unfortunate outcome exemplifies the gap between the promise of AI and the reality of the current state of the art. If you're familiar with the Hype Cycle, you know this is a common pattern in technology. Artificial intelligence shows a lot of promise, and as the technology matures it will undoubtedly help us solve some great technical challenges. Microsoft outlines what they learned from this experience here.

Companies certainly do not want their brands associated with inappropriate and offensive content. Yet for now it is clear that a blacklist combined with human moderation is still very important to protect your brand and your community. Implementing a profanity filtering and moderation tool is an effective solution to prevent this exact risk. For more information on the impact of PR incidents like Microsoft's, read our Customer Communications Case Study.