Permanently Deleted

  • drhead [he/him] · 3 years ago

    From reading more about it, it appears it might be an A/B test thing, so some people might just be in the control group. I've also seen it claimed that the algorithm is basically:


    if text contains ("[n] year old" where n < 18, "little/young boy/girl", "child", "the dog", "the horse") and (any of a number of sex-related terms, which I won't list) then flag

    which obviously can cause problems if you're using any of the many non-sexual senses of "fuck", or trying to ride a horse, for example. Contemplating this makes you really appreciate how versatile a word "fuck" is, and how hard it is for an AI to comprehend such a linguistic enigma.
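
    To make that concrete, here's a rough Python sketch of what the claimed logic would look like if implemented literally. The term lists and the is_flagged name are placeholders of my own, not anything pulled from the actual filter:

        import re

        # Purely illustrative sketch of the *claimed* keyword-matching filter,
        # not the platform's actual code. The term lists are placeholders.
        AGE_PATTERN = re.compile(r"\b(\d{1,2}) year old\b")
        SUBJECT_TERMS = ["little boy", "little girl", "young boy", "young girl",
                         "child", "the dog", "the horse"]
        # Stand-in for the unlisted sex-related terms; "fuck" alone is enough
        # to show why polysemous words cause false positives.
        SEX_TERMS = ["fuck"]

        def is_flagged(text: str) -> bool:
            lowered = text.lower()
            # First bucket: an age under 18, or one of the subject phrases.
            minor_age = any(int(m.group(1)) < 18
                            for m in AGE_PATTERN.finditer(lowered))
            has_subject = minor_age or any(t in lowered for t in SUBJECT_TERMS)
            # Second bucket: any of the sex-related terms.
            has_sex_term = any(t in lowered for t in SEX_TERMS)
            # Flag only when both buckets match.
            return has_subject and has_sex_term

        # False positive: an innocuous sentence trips both term lists.
        print(is_flagged("Fuck, I fell off the horse again."))  # True

    Even this toy version shows the core problem: matching two bags of keywords with no sense of context can't tell an expletive or a riding accident apart from what the filter is actually supposed to catch.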

    So, the filter might still be incredibly crudely designed in ways that they should have known would cause problems... and also apparently goes a bit beyond the stated scope, which is kind of important to disclose if you want people to properly identify bugs.