I'd push back a bit. I'd say the internet inherently facilitates certain vices, whether it's designed humanely or not. It's not just the ad-driven model that makes misinformation spread so easily; to some degree the internet just amplifies human psychology and behavior. It has created a platform for every brand of crazy imaginable, and I know that's prima facie ironic being posted on a lifeboat community of radical left political actors that got booted for being too spicy about what is to be done - but irony aside, the point still seems valid to me.
People aren't ignorant and hateful because they lack access to the right information; it's abundantly clear that they have the same access you and I do. Rather, they are inundated with misinformation and hateful propaganda that's designed, intentionally or not, to exploit epistemic blind spots in our cognition and subtly indoctrinate. The decentralized nature of the internet makes that kind of propaganda MUCH easier to spread than facts and counter-narratives. It's not insurmountable, but it's a serious imbalance, and I don't see a way to combat it with a decentralized model. Centralize enough, however, and you risk destroying what makes the internet what it is to begin with. That's not even getting into the problems of accountability and oversight.
I don't want to give up too much anonymity here, but my research pertains to moral psychology, human reasoning, and belief systems. This is one of the papers I'm basing my opinion on. Ideologies that are well and truly irrational and baseless are not, in fact, fragile. Rather, they are extremely robust, with complicated mechanisms for keeping believers and acquiring new converts. One can even analogize it to a virus: once the belief system takes root, it turns the host's resources against the host and diverts them toward spreading the virus further. The internet seems incredibly conducive to this kind of spread, and much less conducive to the deliberative, reflective thought necessary to combat such belief systems.
I genuinely don't know. I think getting rid of the profit motive as the engine the internet runs on is a good idea, but that's a huge undertaking too, and I don't have the tech background to envision such a thing, much less what might come next. No idea how we'd go about determining how to allocate server space. The anonymity of the current internet is one of its most treasured features, but I don't know if that's really healthy for human flourishing in the long run. I'm being a bad academic right now and not citing sources, but all kinds of studies show that we're more vicious behind a veil of anonymity. Even just interacting through a computer encourages us to be meaner than we would be in person. I don't know how to solve that issue at all; it can probably only be mitigated. The question is whether it can be mitigated enough.
I do know that there are ways we could design technology to be more humane. By that, I mean less gamified and addictive. Right now, success is measured by how long an app or webpage can hold your attention so it can serve you more ads or extract money through micro-transactions. Developers abuse every trick in human psychology they can to make that happen. I actually will cite sources here, because these are publicly available and easy enough to follow up on. The Social Dilemma was a movie released last year discussing the dangers and downsides of this race to the bottom, but I think the book "Your Happiness Was Hacked" does a more robust job and is a pretty easy read. The good news is that we've already identified the exploits, so in theory all we have to do is design technology to avoid those pitfalls instead of steering us right into them.
I guess the next step would be identifying the kinds of traits we DO want to promote, and building our technology in a way that positively encourages those traits. If we wanted to ham-fist it into a Hegelian dialectic, it'd go something like this.
Thesis: we should use the Internet to build complex psychological models that manipulate our behavior to C O N S U M E
Antithesis: actually no, that's not a very good thing; we should design our technology in a way that explicitly avoids those kinds of manipulations. Yikes.
Synthesis: we should use the internet to build complex psychological models that manipulate our behavior to...??? Do good things??? I guess? Yeah nebulously defined good things, that's what I'm going with. That kind of use would be good actually.
But that almost feels like a Cass Sunstein type of "nudge" situation, and I'm a little grossed out by it. Who determines what's ultimately good? How do you avoid the problem of expertise, where people who know less can't effectively evaluate the decisions of people who know more? How do we keep that accountable? There's no ethical way of testing it; these are people's lives we're playing with (part of what makes the ad-driven model so bad to begin with!).
But at this point I'm just rambling. Thanks for inviting me to think about it more, sorry it doesn't feel like I've moved things along at all.