• PROMIS_ring [he/him]
    ·
    edit-2
    2 years ago

there was some drama in the Stable Diffusion world last week after the proprietary model checkpoint used by NovelAI (the 'free speech' oriented successor to AI Dungeon) was leaked. Stability AI then basically couped the ostensibly independent Stable Diffusion subreddit and discord server and banned the dev of the most popular webui for integrating some code from the leak, presumably to appear more business-friendly and better invite this kind of valuation

    Then it was revealed that NAI had themselves lifted the code from another, older project, so i think Stability backpedalled, but idk, i didn't follow it too closely

    Also this webui exposed instances at easily guessable public urls, so people were systematically searching for and using others' instances to generate images, and apparently an attacker could conceivably take over your browser or run arbitrary code or something, idk.

    • CommunistBarbie [she/her]
      ·
      edit-2
      2 years ago

      the ‘free speech’ oriented successor to AI Dungeon

      I’m going to guess “free speech” is a euphemism for platforming racism, queerphobia, and pedophilia.

      • PROMIS_ring [he/him]
        ·
        2 years ago

        my understanding is that AI Dungeon started implementing filters to deter pedos and their users revolted

  • CommunistBarbie [she/her]
    ·
    edit-2
    2 years ago

    Interesting. How did an open-source project secure $1 billion in funding? Isn’t that the kind of thing that happens right before a project forks and goes closed?

    Edit :

    However, the open-source nature of Stability AI’s software means it’s also easy for users to create potentially harmful images — from nonconsensual nudes to propaganda and misinformation. Other developers like OpenAI have taken a much more cautious approach to this technology, incorporating filters and monitoring how individuals are using its product. Stability AI’s ideology, by comparison, is much more libertarian.

    “Ultimately, it’s peoples’ responsibility as to whether they are ethical, moral, and legal in how they operate this technology,” the company’s founder, Emad Mostaque, told The Verge in September. “The bad stuff that people create with it [...] I think it will be a very, very small percentage of the total use.”

    Looks like they’re betting that disclaiming any ethical obligation will lessen the risk of being held liable when their technology is inevitably used to produce illegal content.