Hexbear is a left-wing social network and forum. It was created in 2020 as a response to the perceived moderation bias of other social networks, such as Reddit and Twitter. Hexbear is known for its open and welcoming community, as well as its focus on free speech and political discussion.

The site is organized into a series of "hexes," which are like subreddits. Each hex is dedicated to a specific topic, such as politics, art, or music. Users can create posts and comments in hexes, and they can also vote on posts and comments.

Hexbear is a popular destination for left-wing users who are looking for a space to discuss politics and other issues without fear of censorship. The site has been praised for its open and welcoming community, as well as its focus on free speech.

Here are some additional details about Hexbear:

It is a text-only site, with no images or videos.
It is ad-free.
It is funded by donations from users.
It is open source software.

If you are looking for a left-wing social network with a focus on free speech, then Hexbear is a good option.

  • facow [he/him, any]
    2 years ago

    Really appears to me that everything after

    Hexbear is a left-wing social network and forum. It was created in 2020

is just a hallucination (that occasionally lines up with reality). IMO the hallucination problem is way more common than tech boosters like to admit; I see it all the time when asking it to summarize books, for example.

    • GVAGUY3 [he/him]
      hexagon
      2 years ago

I've used ChatGPT for programming before and found that it is trained on outdated documentation. It really isn't as accurate as people claim.

    • Huitzilopochtli [they/them]
      2 years ago

It does that because all that these systems are is a big statistical model that predicts contextually likely sequences of text. If you think about it that way, it is basically just taking contextual information about this site and linking it to descriptions of sites with similar context, like other Reddit clones, along with other statistical relationships, like the one between complaints about moderation and the term "free speech". That's why it is even better at making up information that sounds true than it is at producing accurate information: even when it is right, it is only right because the statistical guesses that make up the text happened to be correct. When you feed in enough data it gets more accurate, but it also gets better at bullshitting in equal measure, because it doesn't "know" the difference. You can try to train it by flagging answers as correct or incorrect, but that is a case-by-case solution to a general problem that is really big and fundamental.
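The "contextually likely sequences" idea above can be sketched with a toy bigram model. This is an illustration, not how ChatGPT actually works (real models use neural networks over far larger contexts): it just counts which word follows which in a tiny made-up corpus, then emits the statistically most likely continuation, with no notion of whether the resulting claim is true.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = (
    "hexbear is a forum . reddit is a forum . "
    "a forum is organized into communities . "
    "users vote on posts ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Return the single most frequent continuation: a "contextually
    # likely" guess, correct only when the statistics happen to be.
    return following[word].most_common(1)[0][0]

print(most_likely_next("hexbear"))  # → "is"
print(most_likely_next("reddit"))   # → "is" (similar context, similar output)
```

Because "hexbear" and "reddit" appear in similar contexts here, the model treats them interchangeably — which is exactly the mechanism that lets a large model describe one site using plausible-sounding facts borrowed from descriptions of another.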