  • What about addiction risk?

    The data on this are really poor because it’s hard to define addiction. If a prescription stimulant user uses their stimulants every day, and feels really good on them, and feels really upset if they can’t get them…well, that’s basically the expected outcome.

    did I just watch Scott try to reply-guy addiction out of existence?

    also, all the paragraphs Scott uses to call his patients liars and insinuate that other psychiatrists have guilty consciences are really uncomfy? cause it feels like a normal response to the situations he’s describing is “boy, I’m getting a lot of folks with ADHD and neurodivergent traits, and all they seem to want is one treatment for it, maybe I should examine that more closely” and not “look at all these normal-brained fucks with intense problems focusing coming to me for drugs, which I’m certain the other pill-pushers in my industry will give them without question. welp, time to not even attempt to establish a therapeutic dosage, or even guidelines around how much to take, since this is a fun safe party drug”

  • look I don’t want to shock you but that’s basically what they get paid to do. and (perverse) incentives apply - of course goog isn’t just going to spend a couple decabillion then go “oh shit, hmm, we’ve reached the limits of what this can do. okay everyone, pack it in, we’re done with this one!”, they’re gonna keep trying to milk it to make some of those decabillions back. and there’s plenty of useful suckers out there

    a lot of corporations involved with AI are doing their damnedest to damage our relationship with the scientific process by releasing as much fluff disguised as research as they can manage, and I really feel like it’s a trick they learned from watching cryptocurrency projects release an interminable number of whitepapers (which, itself, damaged our relationship with and expectations from the engineering process)

  • What I’m trying to get at is that the practicalities of improving technology are generally skated over by singularitarians in favor of imagining technology as a magic number that you can just throw “intelligence” at to make it go up.

    this is where the singularity always lost me. like, imagine: you build an AI and it maxes out the compute in its server farm (a known and extremely easy to calculate quantity), so it decides to spread onto the internet, where it’ll have infinite compute! well congrats, now the AI is extremely slow, cause the actual internet isn’t magic, it’s a network where latency and reliability are gigantic issues, and there isn’t really any way for an AI to work around that (see the rough numbers sketched below). so singularitarians just handwave it away
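    to put very rough numbers on that, here’s a back-of-envelope sketch in Python. every figure and name in it is an assumption I picked for illustration, not a measurement: one sync step for a model whose state is split across machines costs latency plus transfer time, and moving from a datacenter interconnect to the public internet makes each step hundreds of times slower, before you even count packet loss

    ```python
    # back-of-envelope: one synchronization round for a model split across
    # machines. every number below is a rough assumption, not a measurement.

    def sync_time_s(payload_gb: float, rtt_ms: float, bandwidth_gbps: float) -> float:
        """Latency plus transfer time for one round of syncing payload_gb of data."""
        transfer_s = payload_gb * 8 / bandwidth_gbps  # gigabytes -> gigabits / Gbps
        return rtt_ms / 1000 + transfer_s

    payload_gb = 1.0  # assume ~1 GB of weights/activations exchanged per step

    # inside one server farm: ~0.1 ms RTT, ~400 Gbps interconnect (assumed)
    lan = sync_time_s(payload_gb, rtt_ms=0.1, bandwidth_gbps=400)

    # across the public internet: ~80 ms RTT, ~1 Gbps on a very good day (assumed)
    wan = sync_time_s(payload_gb, rtt_ms=80, bandwidth_gbps=1)

    print(f"in the datacenter: {lan:.3f} s per step")  # ~0.020 s
    print(f"over the internet: {wan:.3f} s per step")  # ~8.08 s
    print(f"slowdown: {wan / lan:.0f}x")               # ~400x
    ```

    and a real workload needs many of those rounds per second, so that ~400x isn’t a constant you optimize away, it compounds on every single step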

    or like when they reach for nanomachines as a “scientific” reason why the AI would be able to exert godlike influence on the real world. but nanomachines don’t work like that at all; it’s just a lazy soft sci-fi idea that gets taken way too seriously by folks who are, at best, mediocre at understanding science