☕CHANGE MY MIND☕

  • MemesAreTheory [he/him, any]
    ·
    3 years ago

    I genuinely don't know. I think getting rid of the profit motive as the engine the internet runs on is a good idea, but that's a huge undertaking too, and I don't have the tech background to envision such a thing, much less what might come next. I have no idea how we'd go about allocating server space. The anonymity of the current internet is one of its most treasured features, but I don't know if that's really healthy for human flourishing in the long run. I'm being a bad academic right now and not citing sources, but all kinds of studies show how we're more vicious behind a veil of anonymity. Even just interacting through a computer encourages us to be meaner than we would be in person. I don't know how to solve that issue whatsoever; it can probably only be mitigated. The question is whether it can be mitigated enough.

    I do know that there are ways we could design technology to be more humane. By that, I mean less gamified and addictive. Right now success is measured by how long an app or webpage can hold your attention, so it can serve you more ads or extract money through micro-transactions. Developers abuse every trick in human psychology to make that happen. I actually will cite sources here, because these are publicly available and easy enough to follow up on. The Social Dilemma was a movie released last year discussing the dangers and downsides of this race to the bottom, but I think the book "Your Happiness Was Hacked" does a more robust job and is a pretty easy read. The good news is we've already identified the exploits, so in theory all we have to do is design technology to avoid those pitfalls instead of steering us right into them.

    I guess the next step would be identifying the kind of traits we DO want to promote, and building our technology in a way that positively encourages those traits. If we wanted to ham-fist it into a Hegelian dialectic, it'd go something like this:

    Thesis: we should use the internet to build complex psychological models that manipulate our behavior to C O N S U M E
    Antithesis: actually no, that's not a very good thing; we should design our technology in a way that explicitly avoids those kinds of manipulations. Yikes.
    Synthesis: we should use the internet to build complex psychological models that manipulate our behavior to...??? Do good things??? I guess? Yeah, nebulously defined good things, that's what I'm going with. That kind of use would be good, actually.

    But that almost feels like a Cass Sunstein-type "nudge" situation, and I'm a little grossed out by it. Who determines what's ultimately good? How do you avoid the problem of expertise, where people who know less can't effectively evaluate the decisions of people who know more? How do we keep that accountable? There's no ethical way of testing it; these are people's lives we're playing with (part of what makes the ad-driven model so bad to begin with!).

    But at this point I'm just rambling. Thanks for inviting me to think about it more; sorry it doesn't feel like I've moved things along at all.