Interesting. How did an open-source project secure $1 billion in funding? Isn’t that the kind of thing that happens right before a project forks and goes closed?
Edit:
However, the open-source nature of Stability AI’s software means it’s also easy for users to create potentially harmful images — from nonconsensual nudes to propaganda and misinformation. Other developers like OpenAI have taken a much more cautious approach to this technology, incorporating filters and monitoring how individuals are using its product. Stability AI’s ideology, by comparison, is much more libertarian.
“Ultimately, it’s peoples’ responsibility as to whether they are ethical, moral, and legal in how they operate this technology,” the company’s founder, Emad Mostaque, told The Verge in September. “The bad stuff that people create with it [...] I think it will be a very, very small percentage of the total use.”
Looks like they’re betting that if they claim they have no ethical obligations, that will lessen the risk of being held liable when their technology is inevitably used to produce illegal content.