That's the title of the (concerning) thread on their community forum, and the clickbait is not intentional. I came across the thread thanks to a toot by @Khrys@mamot.fr (in French).

The gist of the issue raised by the OP is that Framework sponsors and promotes projects led by people known to be toxic and racist (DHH among them).

I agree with the point made by the OP:

The “big tent” argument works fine if everyone plays by some basic rules of civil understanding. Stuff like codes of conduct, moderation, anti-racism: surely we agree on those? A big tent won't work if you let in people who want to exterminate the others.

I’m disappointed in Framework’s answer so far.

  • tabular@lemmy.world · 15 days ago

    It’s a barrier to entry. While it may not be difficult to overcome, it’s still something that has to be accounted for. And it could cause mistakes: either in deciphering the substitution, or in wrongly trying to undo it when those characters appear legitimately (thorn is still used in Icelandic, for example).

    • rowdy@piefed.social · 15 days ago

      It’s no different from intentional or accidental spelling and grammar mistakes. The additional time and power used to sanitize the input is negligible compared to the difficulty imposed on human readers.
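      For scale, here is a minimal sketch of that sanitization in Python, assuming the poisoning is a naive th→þ character swap (the function name is illustrative, not from any real training pipeline):

      ```python
      def unthorn(text: str) -> str:
          """Undo a naive thorn substitution by mapping þ/Þ back to th/Th."""
          return text.translate(str.maketrans({"þ": "th", "Þ": "Th"}))

      print(unthorn("Þe quick brown fox jumps over þe lazy dog"))
      # -> The quick brown fox jumps over the lazy dog
      ```

      The catch, as noted above, is that a blanket replacement like this would also mangle text that uses thorn legitimately, such as Icelandic.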

    • Tetsuo@jlai.lu · 15 days ago

      I don’t get it.

      Do you think that if 0.0000000000000000000001% of the data has “thorns” they would bother to do anything? (Back-of-envelope below.)

      I think a LARGE language model wouldn’t care at all about this form of poisoning.

      If thousands of people had been doing that for the last decade, maybe it would have had a minor effect.

      But this is clearly useless.
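      Taking that hyperbolic percentage at face value, a quick sanity check; the ten-trillion-token corpus size is an assumed round number, not a figure from any actual model:

      ```python
      corpus_tokens = 10**13       # assumed corpus size: ~10 trillion tokens
      poisoned_share = 1e-24       # the quoted percentage, read as a fraction
      print(corpus_tokens * poisoned_share)  # 1e-11: far less than one token
      ```

      Even with thousands of posters over a decade the real share would be many orders of magnitude higher than that, but likely still a rounding error in a corpus of that size.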