Just chilling and sharing a stream of thought…
So how would a credibility system work, and how would it be implemented? What I envision is something similar to upvotes…
You have a credibility score; it starts at 0, neutral. You post something. People don’t vote on whether they like it; the votes are for “good faith”.
Good faith is:
- You posted according to the rules and started a discussion
- You argued in good faith and can part ways amicably with opposing opinions
- You clarified a topic for someone
- If someone has a polar-opposite opinion to yours and is being downvoted because people don’t understand the system
- Etc.
It is tied to the user, not the post.
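Roughly, in code, what I mean. This is a minimal sketch; every name here is made up to pin down the idea, not a real implementation:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    credibility: int = 0  # starts at 0, neutral

def good_faith_vote(target: User, delta: int = 1) -> None:
    """Votes score the person's conduct, not the post's popularity."""
    target.credibility += delta  # tied to the user, not the post

bob = User("bob")
good_faith_vote(bob)    # bob argued in good faith / clarified a topic
print(bob.credibility)  # 1
```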
Good, bad, indifferent…?
Perfect the system
I love the concept, but the ugly reality is that anyone can spin up an instance and pour in an arbitrary number of votes for themselves or anyone else. I think the credibility score would give people false confidence and honestly do more harm than good, unfortunately.
I didn't read your post, I just downvoted because I don't like your username. Whatcha' going to do about it?
(Jk, I picked the instance I joined based on the fact that it doesn't do downvotes. I think downvotes drive perverse incentives)
( thanks! do you happen to know other instances that have downvotes disabled? up until now, i just knew of BeeHaw. Choosing between an upvote and engaging in conversation is more enticing when you can't just give a thumbs down and leave the room )
I don't. And because it's an admin setting that can be toggled easily, any websearch you would do to find other people talking about instances that don't downvote should probably be double-checked with the instance itself. Even mine had a brief discussion about changing course and enabling downvotes.
There's a GitHub project to compare instances. I don't think it includes downvote setting, but maybe the other factors will at least help you narrow down. https://github.com/maltfield/awesome-lemmy-instances?tab=readme-ov-file
Are you thinking of something like Stack Overflow’s reputation system? See https://stackoverflow.com/help/whats-reputation for a basic overview. See https://stackoverflow.com/help/privileges for some examples of privileges unlocked by hitting a particular reputation level.
That system is better optimized for reputation than the threaded discussions we participate in here, but it has its own problems. Still, we could at minimum learn from the things that it does right (rough sketch in code after the list):
- You need site (or community) staff, who are not constrained by reputation limits, to police the system
- Upvoting is disabled until you have at least a little reputation
- Downvoting is disabled until you have a decent amount of reputation and costs you reputation
- Upvotes grant more reputation than downvotes take away
- Voting fraud is a bannable offense and there are methods in place to detect it
- The system is designed to discourage reuse of content
- Not all activities can be upvoted or downvoted. For example, commenting on SO requires a minimum amount of reputation, but comments don’t impact your reputation, even if upvoted, unless they’re reported as spam, offensive, fraudulent, etc. (reporting also requires a minimum reputation).
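For concreteness, here’s how those gates and the asymmetric rewards might fit together. All thresholds and point values below are placeholders I made up, not Stack Overflow’s actual numbers:

```python
UPVOTE_MIN_REP = 15     # can't upvote until you have a little rep
DOWNVOTE_MIN_REP = 125  # downvoting gated behind a decent amount of rep
DOWNVOTE_COST = 1       # downvoting costs the voter something
UPVOTE_GAIN = 10        # upvotes grant more than downvotes take away
DOWNVOTE_LOSS = 2

def cast_vote(voter_rep: int, author_rep: int, up: bool) -> tuple[int, int]:
    """Apply one vote; returns (new_voter_rep, new_author_rep)."""
    if up:
        if voter_rep < UPVOTE_MIN_REP:
            raise PermissionError("not enough reputation to upvote")
        return voter_rep, author_rep + UPVOTE_GAIN
    if voter_rep < DOWNVOTE_MIN_REP:
        raise PermissionError("not enough reputation to downvote")
    return voter_rep - DOWNVOTE_COST, max(0, author_rep - DOWNVOTE_LOSS)
```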
If you wanted to have upvoted and downvoted discourse, you could also allow people to comment on a given piece of discourse without their comment itself being part of the discourse. For example, someone might just want to say “I’m lost, can someone explain this to me?” “Nice hat,” “Where did you get that?” or something entirely off topic that they thought about in response to a topic.
You could also limit the total amount of reputation a person can bestow upon another person, and maybe increase that limit as their reputation increases. Alternatively or additionally, you could enable high rep users to grant more reputation with their upvotes (either every time or occasionally) or to transfer a portion of their rep to a user who made a comment they really liked. It makes sense that Joe Schmo endorsing me doesn’t mean much, but King Joe’s endorsement is a much bigger deal.
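A sketch of those two ideas together, with made-up numbers: cap how much rep any one person can grant another, and make an upvote worth more coming from a high-rep user:

```python
import math

granted: dict[tuple[str, str], int] = {}  # (voter, target) -> rep granted so far

def grant_cap(voter_rep: int) -> int:
    # Each voter can grant any one target only so much, growing with their own rep
    return 10 + voter_rep // 100

def upvote_value(voter_rep: int) -> int:
    # King Joe's endorsement counts for more than Joe Schmo's
    return max(1, int(math.log10(voter_rep + 1)))

def endorse(voter: str, voter_rep: int, target: str) -> int:
    """Returns how much rep this upvote actually grants (possibly 0)."""
    so_far = granted.get((voter, target), 0)
    value = max(0, min(upvote_value(voter_rep), grant_cap(voter_rep) - so_far))
    granted[(voter, target)] = so_far + value
    return value

print(endorse("joe_schmo", 5, "me"))     # 1
print(endorse("king_joe", 99999, "me"))  # 5
```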
Reputation also makes sense to be topic-specific. I could be an expert on software development but completely misinformed about hedgehogs, while still thinking I’m an expert. If I have a high reputation from software development discussions, it would be misleading when I start telling someone about hedgehog diets.
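Topic-specific rep could be as simple as keying the score by (user, topic) instead of keeping one global number. A tiny sketch, names hypothetical:

```python
from collections import defaultdict

reputation: dict = defaultdict(int)  # (user, topic) -> score; no global number

reputation[("me", "software-development")] += 500

def badge(user: str, topic: str) -> str:
    """What to show next to a comment: rep in *this* topic only."""
    return f"{user} ({topic}: {reputation[(user, topic)]})"

print(badge("me", "software-development"))  # me (software-development: 500)
print(badge("me", "hedgehogs"))             # me (hedgehogs: 0)
```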
Yet another thing to consider, especially if you’re federating, is server-specific reputations with overlapping topics. Assuming you allow users to say “Don’t show this / any of my content to <other server> at all” (e.g., if you know something is against the rules over there or is likely to be downvoted, but in your community it’s generally upvoted), there isn’t much reason not to allow a discussion to appear on two or more servers. Then users could accrue reputation on that topic from users of both servers. The staff, and later the high-reputation users, of one server could handle moderation of topics differently than the moderators of another, by design. This could solve disagreements about moderation style, voting etiquette, etc., by giving users alternatives to choose from.
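A sketch of the visibility rule plus cross-server accrual; all names here are hypothetical, and real federation would of course ride on something like ActivityPub:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    topic: str
    blocked_servers: set[str] = field(default_factory=set)  # "don't show this to <server>"

topic_rep: dict[tuple[str, str], int] = {}  # (user, topic) -> rep, from all allowed servers

def upvote_from(server: str, post: Post) -> bool:
    if server in post.blocked_servers:
        return False  # the author opted this content out of that server
    key = (post.author, post.topic)
    topic_rep[key] = topic_rep.get(key, 0) + 1
    return True

p = Post("alice", "hedgehogs", blocked_servers={"strict.example"})
upvote_from("lemmy.example", p)   # counts
upvote_from("strict.example", p)  # ignored by design
print(topic_rep[("alice", "hedgehogs")])  # 1
```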
Just disregard 'votes' entirely. What exactly are you hoping to achieve? Do you want "low-credibility" users highlighted in red so you don't have to bother reading their comments? Have them hidden entirely? Seems like existing tools like blocking and banning already accomplish these goals.
I have an idea. Have every single article or comment posted by a user scanned by an LLM. Prompt the LLM to identify logical fallacies in the post or comment. Post each user’s logical-fallacy count on a public scoreboard hosted on each federated instance. Now, ban the top 10% of scorers each quarter who have a fallacy ratio surpassing some reasonable good-faith objective.
Pros: Everyone is judged by the same impassive standard.
Cons: 1) A fucking LLM has to burn coal for every stupid post we make. 2) LLM prompt injection/hijacking vulnerability.
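For the morbidly curious, the quarterly pass might look like this. `count_fallacies` is a hypothetical stand-in for the actual LLM call (which is exactly where both cons live), and the ratio threshold is a number I made up:

```python
FALLACY_RATIO_THRESHOLD = 0.2  # "some reasonable good faith objective" (made up)

def count_fallacies(text: str) -> int:
    # Hypothetical: prompt an LLM to count logical fallacies in `text`.
    # Stubbed to 0 here. Also where prompt injection bites you.
    return 0

def quarterly_bans(posts_by_user: dict[str, list[str]]) -> list[str]:
    """Ban the worst-scoring 10% of users whose fallacy ratio exceeds the threshold."""
    ratios = {
        user: sum(count_fallacies(p) for p in posts) / max(1, len(posts))
        for user, posts in posts_by_user.items()
    }
    worst_first = sorted(ratios, key=ratios.get, reverse=True)
    top_decile = worst_first[: max(1, len(worst_first) // 10)]
    return [u for u in top_decile if ratios[u] > FALLACY_RATIO_THRESHOLD]
```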
I think mob rule as a moderation system is bad, and having a few power-users in charge is not the worst answer to that.
In my head: you'd have small webs of trust (I can vouch for you, you can vouch for your friend, your friend can vouch for me, so I must be somewhat trustworthy), and these webs would have some kind of voting power over flagged comments. Of course, that can be gamed...
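The vouching chain is just reachability in a directed graph; a sketch (how much voting power a web actually gets over flagged comments is the part I haven't solved):

```python
from collections import deque

vouches = {                 # directed edges: who vouches for whom
    "me": {"you"},
    "you": {"your_friend"},
    "your_friend": {"me"},  # the loop back that makes me somewhat trustworthy
}

def trusts(a: str, b: str) -> bool:
    """True if a chain of vouches leads from a to b."""
    seen, queue = {a}, deque([a])
    while queue:
        for nxt in vouches.get(queue.popleft(), ()):
            if nxt == b:
                return True
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(trusts("you", "me"))  # True: you -> your_friend -> me
```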
Is this for an online community like Lemmy, or more oriented towards fixing the credit institutions?
in any case, a credibility metric would soon turn into a goal to achieve ^(karmafarming says what?)^
A metric ceases to be useful when it becomes a goal.
Thank you all for the discussion! I have read all the comments, enjoyed each response, and will continue to do so. I came out with pretty much the same feelings as the rest of you… In an ideal world…
Once again, thank you and good luck to everyone out there…we got this!
It's just not that good of a metric overall. Not just because it would be easy to fake, but also because it would inevitably divide into tribes that unconditionally upvote each other. See: politics in Western countries.
You can pile up a ton of reputation and still be an asshole and still get a ton of support from like-minded people.
The best measure of someone's reputation is a quick glance at their post history.