A new wave of AI is poised to transform the technologies we use every day. Trust must be at the core of how we develop and deploy AI, every day, all the time. It is not an optional 'add-on'. Mozilla has long championed a world where AI is more trustworthy, investing in startups, advocating for laws, and…
What in the world would an "uncensored" model even imply? And give me a break: private platforms choosing not to platform something or someone isn't "censorship"; you don't have a right to another's platform. Mozilla has always been a principled organization, and they have never pretended to be apathetic fence-sitters.
Anything that prevents it from answering my query. If I ask it how to make a bomb, I don't want it to be censored. It's gathering this from public data they don't own, after all. I agree with Mozilla's principles, but LLMs are also tools and should be treated as such.
If you ask how to build a bomb and it tells you, wouldn't Mozilla get in trouble?
Do gun manufacturers get in trouble when someone shoots somebody?
Do car manufacturers get in trouble when someone runs somebody over?
Do search engines get in trouble if they accidentally link to harmful sites?
What about social media sites getting in trouble for users uploading illegal content?
Mozilla doesn't need to host an uncensored model, but their open source AI should be able to be trained to be uncensored. So I'm not asking them to host this themselves, which is an important distinction I should have made.
Uncensored LLMs exist already, so any damage they could cause is already possible.