After realizing (by the service's own admission) that AI Assist is able to answer questions about topics covered across the whole network, I decided to have some fun and ask a question taken straight from Arqade.SE:
- How much damage do Ancient Arrows do? (In Breath of the Wild)
I tested this twice, and both times the service first referenced the actual content from the question, posted an answer… and then, a second later, removed it and replaced it with a warning:
> Sorry, I can't answer that. To ask a new question, please start a new chat. Try asking about coding, development, or topics on the Stack Exchange network.
This looks like a form of self-censorship, probably triggered by the fact that the source material contains "sensitive" words like "killing" (mobs in a video game -_-'), "bomb" (again, within the video game), and so on.
If this is indeed the case, I fear we have two issues:
- First, the bot's censorship logic fails to differentiate between legitimate use of "sensitive" words and topics in the context of a game and, for example, asking how to build a bomb in real life. I wonder what would happen if I asked "Why did Mami own a book about building bombs in Scene 0 of Puella Magi Madoka Magica?"
- Second, and far more important, whoever built this feature seems to miss that attempting to censor content generated by an LLM after it has already been sent to the user isn't exactly the smartest move: it accomplishes very little, since the user already got a reply they weren't supposed to get (see the sketch after this list).
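To illustrate the point, here is a minimal sketch of the two possible orderings, assuming (purely hypothetically, since I obviously can't see the actual implementation) that the assistant streams the model's output to the browser and only runs its moderation check afterwards. All names below are made up for illustration:

```python
# Hypothetical sketch: names and structure are illustrative only,
# not the actual AI Assist implementation.

def stream_then_moderate(token_stream, send_to_user, moderation_check):
    """Post-hoc filtering: every token reaches the user before the check runs."""
    shown = []
    for token in token_stream:
        send_to_user(token)      # already rendered in the user's browser
        shown.append(token)
    if not moderation_check("".join(shown)):
        # Too late: this only swaps what the UI displays; it cannot "unsend"
        # the answer the user has already read.
        send_to_user("Sorry, I can't answer that.")


def moderate_then_send(token_stream, send_to_user, moderation_check):
    """Server-side filtering: nothing reaches the user until the check passes."""
    answer = "".join(token_stream)   # buffer the full reply first
    if moderation_check(answer):
        send_to_user(answer)
    else:
        send_to_user("Sorry, I can't answer that.")


# Tiny usage example with stand-in callbacks:
if __name__ == "__main__":
    tokens = ["Ancient ", "Arrows ", "do ", "a ", "lot ", "of ", "damage."]
    allowed = lambda text: "damage" not in text        # toy moderation rule
    stream_then_moderate(iter(tokens), print, allowed)  # user sees the answer, then the warning
    moderate_then_send(iter(tokens), print, allowed)    # user sees only the warning
```

Buffering (or moderating chunk by chunk) before anything is sent costs a bit of latency, but at least the refusal actually prevents the user from seeing the answer; retracting after the fact only hides text that has already been read, or screenshotted.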
I advise reading "Case 87: The Concubine's Fog" from the Codeless Code collection:
> Said Banzen, "Fog makes an excellent curtain, but a poor wall."

