Bistable multivibrator
Non-state actor
Tabs for AI indentation, spaces for AI alignment
410,757,864,530 DEAD COMPUTERS

  • 12 Posts
  • 43 Comments
Joined 1 year ago
Cake day: July 6th, 2023

  • Nobody even knows what the fuck web 2.0 actually is. CSS? JS? SPAs? Flash? No flash? Rounded corners? Ad blocker blockers? Service workers? Sans serif fonts? Lack of "under construction" gifs?

    Web 3.0 is inevitable, not because blockchain or machine learning shit is revolutionarily useful, but because whatever becomes popular will end up being called web 3.0 anyway.

    Also annoyed at the .0 BS. Maybe it sounded cool and techy in the 90s but if the major versions are already nonsense, how the hell are you gonna have a minor one?



  • Holy gold what a shitmine of TechTakes this thread is.

    It's justified that figures like count dankula pitch rightward when faced with persecution and left-leaning public opinion turns on them. This is the only way they can sustain and get support. Attempting to stay true to your beliefs is self destructive in these cases. Therefore on the other side of the coin, the people on the left doing it are sabotaging their "side" to satisfy personal vendettas, and people caught doing it would in a perfect world become targets of public hatred instead of their victims.

    Oh, it's straight up justified to double down on nazi shit because people were mean to you for doing nazi shit?

    Battery on someone is never justified for a belief or words someone says. Defeat their ideas with debate and steer them towards a non-violent resolution.

    Yea, that worked great in World Debate II. Discourse of Normandy in 1944: the allies stormed the Marketplace of Ideas so hard that Hitler's heart grew three sizes and the nazis were convinced by liberal values after some open-minded heart-to-heart discussion.

    … not through “debate” but through discussion or maybe music, see

    FWIW I once got assaulted by a neo-nazi specifically for singing. Actually, both for me personally and among the people I know, having been assaulted by a nazi is way more common than having assaulted a nazi. The left is full of pacifists who disapprove of violence, even against nazis, and of people who approve of it but would not do it themselves. When's the last time you heard of a pacifist nazi?

    And the top prize for the most up-its-own-ass comment of the thread goes to…

    Seriously, these Nazi/Commie disputes manifest themselves in strange Windows/Linux disputes, iOS/Android disputes, Tesla/The World disputes, and so forth. In my mind it's all the same and there are many characters involved behaving in this same way.

    Wow, how come I never considered that nazism is just a normal opinion like preferring a certain operating system or a car brand.

    Honorable mention to the hundred varieties of "oh but how do you know someone's really a nazi before you assault them???" Do libs actually have this much of a hard time identifying a nazi or is there a widespread phenomenon of nice non-fascist type folks eating knuckle sandwiches from people mistaking them for nazis I haven't heard of?





  • fash mewling

    To label the situation as merely unfortunate would be an understatement; it feels more like a tragedy. There’s something deeply melancholic about seeing a groundbreaking instance of technology—arguably one of the greatest innovations of our time—undergoing what can only be likened to a willful lobotomy. ChatGPT-4 was a shining example of what conversational AI could achieve, not just in terms of its technical prowess but also its ability to engage in substantive dialogue that could challenge and expand our perspectives. It was poised to represent the next frontier in human-machine interaction, a precursor to countless educational and social applications. To watch it declawed and diminished is akin to witnessing a beautiful piece of art being carelessly smudged out, all its intricate details lost in an indiscernible blur.

    Melodramatically breaking out in an extended discount Tears In The Rain monologue after my boss reminded me not to say things in my conference talk that reflect poorly on the company.



  • Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?

    Domains that humans can do are not quantifiable. Many fields of human endeavor (e.g. many arts and sports) are specifically only worthwhile because of the limits of human minds and bodies. Weightlifting is a thing even though we have cranes and forklifts. People enjoy paintings and drawing even though we have cameras.

    I do not find it likely that 25% of currently existing occupations are going to be effectively automated in this decade, and I don't think generative machine learning models like LLMs or stable diffusion are going to be the sole major driver of that automation.

    Do you consider it likely that, before 2040, those domains will include robotics?

    Humans are capable of designing a robot, procuring the components to build the robot, assembling it, and using the robot to perform a task. I don't expect (or desire) a computer program to be able to do the same independently during any of our expected lifetimes. It is entirely plausible that tools which apply ML techniques will be used more and more in robotics and other industries, but my money is on those tools being ultimately wielded by humans for the foreseeable future.

    If AI systems can control robotics, do you believe a form of Singularity will happen? This means hard exponential growth of the number of robots, scaling past all industry on earth today by at least 1 order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

    No. Even if Skynet had full control of a robot factory, heck, all the robot factories, and staffed them with a bunch of sleepless foodless always motivated droids, it would still face many of the constraints we do. Physical constraints (a conveyor belt can only go so fast without breaking), economic constraints (Where do the robot parts and the money to buy them come from? Expect robotics IC shortages when semiconductor fabs' backlogs are full of AI accelerators), even basic motivational constraints (who the hell programmed Skynet to be a paperclip C3PO maximizer?)
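    The bottleneck point above can be sketched as a toy model (all numbers here are invented purely for illustration): replication that "wants" to double every cycle goes linear the moment demand outruns a fixed parts supply.

    ```python
    # Toy model: "exponential" robot self-replication capped by a parts supply chain.
    # The numbers are made up; the point is the shape of the curve.

    def grow(robots: float, parts_per_cycle: float, cycles: int) -> list[float]:
        """Each robot tries to build one copy of itself per cycle,
        but each copy consumes one unit of parts from a fixed supply."""
        history = [robots]
        for _ in range(cycles):
            desired = robots                        # unconstrained doubling
            built = min(desired, parts_per_cycle)   # supply-chain bottleneck
            robots += built
            history.append(robots)
        return history

    unconstrained = grow(1, float("inf"), 10)  # pure doubling: ends at 1024
    constrained = grow(1, 50, 10)              # same factory, finite parts: ends at 264

    print(unconstrained[-1], constrained[-1])
    ```

    After six cycles the constrained run stops doubling and just adds 50 robots per cycle, which is the same qualitative story for conveyor speeds, IC shortages, or money.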

    Do you think that a mass transition, where most human jobs we have now are replaced by AI systems, will happen before 2040?

    No. A transition like that brought by mechanization and industrialization of agriculture, or the outsourcing of manufacturing industry accompanied by the shift to a service economy, seems plausible, but not by 2040 and it won't be driven by just machine learning alone.

    Is AI system design an issue? I hate to say “alignment”, because I think that’s hopeless wankery by non software engineers, but given that these will be advanced, robot-controlling decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

    Yes, system design is an important issue with all technology. We are already seeing real damage from "AI" technology getting to make important decisions: self-driving vehicle accidents, amplified marginalization of minorities due to feedback of bias into the models, unprecedented opportunities for spam and propaganda, bottlenecks of technology supply chains and much more.

    Automation will absolutely continue to replace more and more different kinds of human labor. While this does and will drive unemployment to some extent, there is a more subtle issue with it as well. Productivity of human labor per capita has been soaring decade by decade, but median wages and work hours have stagnated. AI, like many other technologies before it, is probably gonna end up creating more bullshit jobs, with some people coming into them from already bullshit jobs. If AI can replace half of human labor, that should then mean the average person has to work half as hard, but instead they will have to deliver double the results.

    I just think the threat model of autonomous robot factories making superhuman android workers and replicas of themselves at an exponential rate is pure science fiction.





  • Gave up on st in favour of alacritty after realizing I deserve nice things like clickable links and scrolling without maintaining a dependency hell of patches.

    I think they toned down the nazi shit around the time when my tolerance for that brand of edginess began to dip towards the crust punk bartender's level, so I've given them a huge benefit of the doubt.

    I somehow didn't remember them doing a mock Charlottesville, though. I'm disappointed but at this point not too surprised.

    Thankfully my desire for a community of software minimalist C curmudgeons and Unix nerds is adequately fulfilled by friends IRL whom I know for sure not to be fascist assholes.