☕CHANGE MY MIND☕

  • MemesAreTheory [he/him, any]
    ·
    edit-2
    4 years ago

    The internet was a mistake. Seriously - we clearly didn't evolve for this kind of technology; it's too much for our stupid monkey brains. I don't think we can put the cat back in the bag, and I don't have any good suggestions, but I definitely know a dumpster fire when I see one.

    Edit: :elmofire:

        • MemesAreTheory [he/him, any]
          ·
          4 years ago

          I'd push back a bit. I'd say the internet inherently facilitates certain vices, whether it's designed humanely or not. It's not just the ad-driven model that makes misinformation spread so easily; to some degree it's just amplifying human psychology and behavior. The internet has created a platform for every brand of crazy imaginable, and I know that's prima facie ironic being posted on a lifeboat community of radical left political actors that got booted for being too spicy about what is to be done - but irony aside, the point still seems valid to me.

          People aren't ignorant and hateful because they don't have access to the right information; it's abundantly clear that they do in fact have access, same as you and me. Rather, they are inundated with misinformation and hateful propaganda that's designed, intentionally or not, to exploit epistemic blind spots in our cognition and subtly indoctrinate. The decentralized nature of the internet makes that kind of propaganda MUCH easier to spread than facts and counter-narratives. It's not insurmountable, but it's a serious imbalance, and I don't see a way to combat that with a decentralized model. Centralize enough, however, and you risk destroying what makes the internet what it is to begin with. That's not even getting into the problems associated with accountability/oversight.

          I don't want to give up too much anonymity here, but my research pertains to moral psychology, human reasoning, and belief systems. This is one of the papers I'm basing my opinion on. Ideologies that are well and truly out-of-this-world irrational and baseless are not in fact fragile. Rather, they are extremely robust, with complicated mechanisms for keeping believers and acquiring new converts. One can even analogize it to a virus. Once the belief system takes root, it turns the resources of the host against itself and diverts them towards further spreading the virus. The internet seems incredibly conducive to this kind of spread, and much less conducive to the deliberative and reflective thought necessary to combat such belief systems.

            • MemesAreTheory [he/him, any]
              ·
              4 years ago

              I genuinely don't know. I think getting rid of the profit motive as the engine the internet runs on is a good idea, but that's a huge undertaking too, and I don't have the tech background to envision such a thing, much less what might come next. No idea how we'd go about determining how to allocate server space. The anonymity of the current internet is one of its most treasured features, but I don't know if that's really healthy for human flourishing in the long run. I'm being a bad academic right now and not citing sources - but all kinds of studies show how we're more vicious behind a veil of anonymity. Even just interacting through a computer encourages us to be meaner than we would be in person. I don't know how to solve that issue whatsoever; it can probably only be mitigated, and the question is whether it can be mitigated enough.

              I do know that there are ways we could design technology to be more humane. By that, I mean less gamified and addictive. Right now success is measured by how long an app or webpage can hold your attention and thus serve you up more ads or extract money through microtransactions. Developers abuse every psychological trick they can to make that happen. I actually will cite sources here because these are publicly available and easy enough to follow up on. The Social Dilemma was a movie released last year discussing the dangers and downsides of this race to the bottom, but I think the book "Your Happiness Was Hacked" does a more robust job and is a pretty easy read. The good news is we've already identified the exploits, so in theory all we have to do is design technology to avoid those pitfalls instead of steering us right into them.

              I guess the next step would be identifying the kind of traits we DO want to promote, and building our technology in a way that positively encourages those traits. If we wanted to ham-fist it into a Hegelian dialectic, it'd go something like this.

              Thesis: we should use the Internet to build complex psychological models that manipulate our behavior to C O N S U M E
              Antithesis: actually no, that's not a very good thing, we should design our technology in a way that explicitly avoids those kinds of manipulations. Yikes.

              Synthesis: we should use the internet to build complex psychological models that manipulate our behavior to...??? Do good things??? I guess? Yeah, nebulously defined good things, that's what I'm going with. That kind of use would be good actually.

              But that almost feels like a Cass Sunstein type of "nudge" situation, and I'm a little grossed out by it. Who determines what's ultimately good? How do you avoid the problem of expertise, where people who know less can't effectively evaluate the decisions of people who know more? How do we keep that accountable? There's no ethical way of testing it; this is people's lives we're playing with (part of what makes the ad-driven model so bad to begin with!).

              But at this point I'm just rambling. Thanks for inviting me to think about it more, sorry it doesn't feel like I've moved things along at all.

      • MemesAreTheory [he/him, any]
        ·
        4 years ago

        I reply to this sentiment a bit in my response to Beatnik. Long story short I disagree - but I thought you might be interested in that exchange.

    • spectre [he/him]
      ·
      edit-2
      4 years ago

      I think cultural norms will develop, and the Alphas (or possibly the generation after them) will be very offline

      Edit: I guess Alphas are the kids of millennials, so maybe not, but Zoomers will learn their lesson better and do a better job of moderating their kids' internet use.

      • CarlTheRedditor [he/him]
        hexagon
        ·
        4 years ago

        Alphas are the kids of millennials

        Not directing this at you but Jesus can we just fucking cancel the idea of generations already or at least come up with more meaningful labels than just cycling through letters.

      • MemesAreTheory [he/him, any]
        ·
        4 years ago

        Yeah, I get that - but in the early days it was a place for hobbyists. Now it's supposed to be for everyone, and go figure, that means we bring all the rest of our bullshit onto the platform with us. I think the internet inherently promotes certain vices and some of our worst tendencies, whether it's profit/ad-driven or not. I get into more detail in my reply to Beatnik, which I was in the middle of writing when I saw your comment come in.

          • MemesAreTheory [he/him, any]
            ·
            4 years ago

            I'll have to admit to being mostly ignorant of the early internet. My characterization and dismissal there is probably wrong in a lot of ways. In my reply to Beatnik, though, that's not the main thrust of my disagreement. It's more about blind spots in human cognition more broadly, and how the internet tips the scales against us.

          • MemesAreTheory [he/him, any]
            ·
            4 years ago

            But muh anonymity! Muh authoritarianism! Muh social credit score!

            Memes aside, I think that kind of fix is our best bad option, but the problems it brings with it can't be ignored either.