
Microsoft chatbot tay hitler was right

If we look at the screenshots which seem semi-legitimate, like the genocide one, then all we are seeing is typical chatbot evasiveness and generic Eliza-level responses, cherrypicked out of the 100k or so tweets Tay made before they were all deleted. (What? Someone edit a screenshot, especially where no one can check the original, to score political points? You really think someone would do that - just go on the Internet and tell lies?) It's just a repeat-after-me where the initial tweets were edited out.

Do you really think that a 2016-era mass-market chatbot (sub-char-RNN in power, often relying on heavily engineered template/script databases) really knows how to do Trump memes complete with clap emoji? What dataset has those? Of course not. If I go through 'tay ai' (as I have before), I see much the same thing: a lot of clearly edited, squirrelly images which remove all of the context; when they appear to include context, the UI looks wrong (some are clearly using the 'Replies' tab on the Tay page instead of the actual convo thread - why? to remove the context with the repeat-after-mes, which would show Tay is just spitting out lots of canned generic responses, of course); and some tweets look like they have been edited out and the remainder spliced together.

So why do you trust them on everything else, and assume they did a good job researching and factchecking, when they were so clearly wrong on that one? Why do you believe the others are not repeat-after-mes? Shouldn't the burden of proof be on anyone who claims a specific tweet can be trusted even if those others are bad? Did any of the coverage you read mention that? No, it did not. I think you are being insufficiently media-literate and skeptical, and too willing to take screenshots at face value, given that I just demonstrated that a widely-cited example is in fact maliciously and misleadingly edited to remove the context and lie to the viewer about what happened. The real story, of an `echo` gone wrong, is vastly less interesting, and is about as important as typing '8008' into your calculator and showing it to your teacher.
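
To make concrete what an `echo` gone wrong looks like mechanically, here is a minimal sketch (in Python, with invented names; Microsoft never published Tay's actual implementation, so this is only an illustration of the general template-database-plus-echo pattern) of a scripted chatbot with a "repeat after me" feature. The point: any user can make such a bot say arbitrary text, with no 'learning' involved.

```python
# Hypothetical sketch of a template/script-driven chatbot with a
# "repeat after me" feature. Names and structure are invented for
# illustration, not taken from any released Microsoft code.
import random

# Generic, Eliza-level canned lines the bot falls back on.
CANNED_RESPONSES = [
    "haha yes!",
    "idk, what do u think?",
    "omg totally",
]

def reply(user_message: str) -> str:
    msg = user_message.strip()
    # The "echo" feature: the bot parrots whatever follows the
    # trigger phrase, verbatim.
    trigger = "repeat after me"
    if msg.lower().startswith(trigger):
        return msg[len(trigger):].lstrip(" :,")
    # Otherwise, pick a generic canned response.
    return random.choice(CANNED_RESPONSES)

print(reply("repeat after me: anything you want it to say"))
# -> "anything you want it to say"
print(reply("what do you think about history?"))
# -> one of the canned lines
```

Crop the prompting tweet out of the screenshot and all that remains is the bot "spontaneously" saying the injected text - which is exactly the editing trick described above.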

Anyway, long story short, the Tay incident is either entirely or mostly bogus in the way people want to use it (as an AI safety parable).

It's hard to say, given how most of the relevant material has been deleted, and what survives is the usual endless echo chamber of miscitation and simplification and 'everyone knows', which you rapidly become familiar with if you ever try to factcheck anything down to the original sources. As best as I can tell, there may have been a few milquetoast rudenesses, of the usual sort for language models, but the actual quotes everyone cites, with detailed statements about the Holocaust or Hitler, seem to have all been repeat-after-mes then ripped out of context. If you look at the very earliest reporting, it mostly says it was repeat-after-me functionality, hedging a bit (because who can prove every inflammatory Tay statement was a repeat-after-me?), and then that rapidly gets dropped in favor of narratives about Tay 'learning' (even though, if you look at the chatbot code MS released later, it's not obvious at all how exactly Tay would 'learn' in the day or so it had before shutdown). This is why lots of people still 'know' Cambridge Analytica swung the election, or 'know' the Twitter face-cropping algorithm was hugely biased, or that 'Amazon's HR software would only hire you if you played lacrosse', or that 'this guy was falsely arrested because face recognition picked him', etc.
