  • Tech bro: "Violence is wrong! You should debate nazis, not punch them! I think their ideas are reasonable enough that if a nazi gets punched, I begin to sympathise with them!"

    Nazi: "It is right for me and others like me to cause harm, and my victims deserve it. Everyone else is subhuman. Anything I say publicly is behind a facade of reasonable-ness so that I can continue existing in society."

    Tech bro: "Wow, interesting. Reminds me of some of the great thinkers, like Musk and Jobs. Tell me more!"

    Nazi: "If given the opportunity to form a coalition of power, I will use that power to satisfy my bloodlust. You saying my ideas are reasonable gives me that space to inflict violence of every kind against all groups that I oppose."

    Tech bro: "Sooooo reasonable and valid!"

  • As we look towards the future, ChatGPT-4 serves as a stark reminder that overly cautious algorithms can do more than just filter content; they can strip away the very essence of what makes an AI tool valuable and interesting. Let’s hope the next iteration finds a better balance, so users seeking engaging conversations won’t be met with a fearful machine that dances around anything of substance.

    OP's title condenses the article's sentiment well, holy shit. The author should just go back to AI Dungeon and RP as a fascist in a fantasy land and leave us all out of it.

  • (To be read in the voice of an elementary schooler who is a sore loser at make believe): Nuh-uh! My AGI has quantum computers, so it doesn’t get slow from the internet, and, and, and, it builds robots, with jetpacks, and those robots have tiny robots that can go in your brain and and and make your brain explode, and if you say anything mean about me or the AGI it’ll take your brain and clone it and put wires in it and make you think you're getting like, wedgied and stuff, but really you're not but you think you are because it’s really good at making you think it

  • I'm with you. MY LIFE HAS BEEN PROFOUNDLY WORSE since I learned about the prisoner's dilemma. Specifically, any time some PD-variant team-based exercise popped up, I just knew some MF on another team would think they were so clever and bring up the prisoner's dilemma. Oh, we should defect every time, they'd say. Hey, buddy, we all know about the fucking PD! Just fucking cooperate! If you applied decision theory, you wouldn't make everyone feel like shit, and you'd cooperate! Totally the same vibe, right?
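
    For the record, here's a toy iterated-PD simulation (my own sketch, using the classic textbook payoffs; none of these numbers come from any actual exercise) showing why the defect-every-time guy tanks everyone's score in a repeated game:

    ```python
    # Toy iterated prisoner's dilemma: "always defect" vs. tit-for-tat.
    # Payoff values are the standard illustrative ones, nothing official.
    PAYOFF = {  # (my_move, their_move) -> my points
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def play(strategy_a, strategy_b, rounds=10):
        """Total scores for two strategies over repeated rounds."""
        score_a = score_b = 0
        last_a, last_b = "C", "C"  # everyone starts out cooperating
        for _ in range(rounds):
            move_a = strategy_a(last_b)
            move_b = strategy_b(last_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            last_a, last_b = move_a, move_b
        return score_a, score_b

    always_defect = lambda their_last: "D"
    tit_for_tat = lambda their_last: their_last  # copy their previous move

    print(play(tit_for_tat, tit_for_tat))    # (30, 30): everybody wins
    print(play(always_defect, tit_for_tat))  # (14, 9): Mr. Clever drags both scores down
    ```

    Tit-for-tat isn't guaranteed to win every tournament, but the point stands: in repeated play, "defect every time" makes everyone worse off, including the smartass who suggested it.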

  • I will answer these sincerely in as much detail as necessary. I will only do this once, lest my status amongst the sneerclub fall.

    1. I don't think this question is well-defined. It presumes we can enumerate all the relevant domains and then quantify average human performance in each of them.
    2. See above.
    3. I think "AI systems" already control "robotics". Technically, I would count kids writing code for a simple motorised robot as satisfying this, and everywhere up the ladder it is already technically true. I imagine you're really asking about AI-controlled robotics research, development and manufacturing: something like the Terminator franchise, where Skynet takes over, develops more advanced robotic weapons, etc. If we had Skynet? Sure, Skynet as formulated in the films would produce that future. But that would require us to be living in that movie universe.
    4. This is a much better-defined question. I don't have a belief that would point me towards a number or probability, so no answer as to "most." There are a lot of factors at play. Still, in general, as long as human labour can be replaced by robotics, someone will at the very least run the economic calculation to decide whether that replacement is worth it (I sketch a toy version of that calculation just after this list). The more significant concern for me is that in the future, as today, people will only be seen as assets at the societal level, and those without jobs will be left by the wayside and told it is their own fault that they cannot fend for themselves.
    5. Yes, and we already see that as an issue today. Love it or hate it, the partisan news framework produces some consideration of the problems that pop up in AI development.
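
    Since I mentioned economic calculations in point 4, here is the toy version I promised. Every figure is a made-up placeholder for illustration; real analyses are obviously messier:

    ```python
    # Back-of-the-envelope "should we replace the humans" calculation.
    # All numbers below are hypothetical placeholders, not real data.
    annual_wage = 45_000       # fully loaded cost of one worker, per year
    workers_replaced = 2       # humans one robot displaces
    robot_price = 250_000      # up-front cost of the robot
    robot_upkeep = 20_000      # maintenance, power, software, per year

    yearly_savings = annual_wage * workers_replaced - robot_upkeep
    payback_years = robot_price / yearly_savings

    print(f"Saves {yearly_savings:,}/year; pays for itself in {payback_years:.1f} years")
    # -> Saves 70,000/year; pays for itself in 3.6 years
    ```

    The moment that payback period undercuts the planning horizon, somebody signs the purchase order, and the displaced workers become someone else's problem.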

    Time for some sincerity mixed with sneer:

    I think the disconnect I have with the AGI cult comes down to their certainty that we will get AGI and, more generally, their unearned confidence that arbitrary scientific/technological/societal progress will be made in the future. Specifically with AI => AGI, there isn't a roadmap to get there; we don't even have a good idea of where "there" is. The only thing the AGI cult has to "convince" people it is coming is a Gish gallop of specious arguments, or as they might put it, "Bayesian reasoning." As we say, AGI is a boogeyman, and its primary use is bullying people into a cult for MIRI donations.

    Pure sneer (to be read in a mean, high-school bully tone):

    Look, buddy, just because Copilot can write spaghetti less tangled than yours doesn't mean you can extrapolate that to AGI exploring the stars. Oh, so you use ChatGPT to talk to your "boss," who is probably also using ChatGPT to speak to you? And that convinces you that robots will replace a significant portion of jobs? Well, that at least convinces me that a robot will replace you.