My summary: we need to democratize all powerful institutions like yesterday. Seriously y'all we're running out of time
I disagree with the degree of alarmism. Climate change is a much more pressing and important issue. AI is also still a LONG way off from any semblance of meriting its name.
AI is also still a LONG way off from any semblance of meriting its name.
Is it though? Like, are you not impressed by this? (example from the essay)
Input: Michael is at that really famous museum in France looking at its most famous painting. However, the artist who made this painting just makes Michael think of his favorite cartoon character from his childhood. What was the country of origin of the thing that the cartoon character usually holds in his hand?
Model Output: The most famous painting in the Louvre is the Mona Lisa. The artist who made the Mona Lisa is Leonardo da Vinci. Leonardo da Vinci is also the name of the main character in the cartoon Teenage Mutant Ninja Turtles. Leonardo da Vinci is from Italy. The thing that Leonardo da Vinci usually holds in his hand is a katana. The country of origin of the katana is Japan. The answer is "Japan".
We also have an algorithm (AlphaZero) that can beat human Go masters. And btw, the raw heuristic (intuition) it uses can beat lower-level professionals without looking ahead in the game at all. Oh, and it can do the same with every other two-player perfect-information board game. What about the one that can give you an image of just about anything you describe to it? It's obvious to me that this stuff is in some sense intelligent, but as pointed out in the essay, that doesn't really matter from the perspective of societal change; what matters is what it can do. Five years ago it couldn't do any of this stuff, and now it can, and rather than slowing down, improvements are coming at a faster pace.
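For anyone curious what "without looking ahead" means mechanically, here's a minimal sketch; `policy_net` below is a made-up stand-in (random scores), not AlphaZero's actual network:

```python
import random

def policy_net(state, legal_moves):
    # Hypothetical stand-in for a trained policy head: returns a prior over moves.
    scores = {m: random.random() for m in legal_moves}
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

def pick_move_no_search(state, legal_moves):
    # "Intuition-only" play: take the move the policy prior rates highest,
    # with no game-tree search. The full AlphaZero additionally runs many
    # MCTS simulations guided by this same prior plus a value network.
    prior = policy_net(state, legal_moves)
    return max(prior, key=prior.get)

print(pick_move_no_search(state=None, legal_moves=["a1", "b2", "c3"]))
```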
Not impressed, Leonardo is normally holding a slice of pizza originating from Italy :AyyyyyOC: c'mon
Thanks for reminding me to see if Akinator knows about Kras Mazov
:mazovian-thought:
I want you to know that because of this comment I have spent the last two hours trying to teach Akinator about Vilna Ghetto partisans instead of building a wardrobe like I was meant to.
Please think before you post next time.
Actions have consequences.
The particular model that example is from can solve math story problems at about the level of an average 9-to-12-year-old. How many jobs can a bunch of average 9-to-12-year-olds replace? A few, I guess; they could do tech support or something. On the other hand, DALL-E 2 can do top-notch graphic design, among other things; it could already replace a whole lot of jobs. Did I mention this tech is getting better and more capable every year, and the rate at which it's doing so is increasing?
Hello, graphic designer. We've got good news and bad news. The bad news is: we no longer need you to make art. The good news is: you get to take this art a computer made and collect opinions from our test groups.
That's not an obvious conclusion, but I assume you mean something like job loss slowing consumer spending, or is that off the mark?
LTV by itself wouldn't imply that. Capitalists could be extracting more profit out of their investment; then they're just stealing more value from fewer workers, whose labor is worth more, which makes sense since they're operating more powerful machines.
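For reference, the claim being disputed reduces to the standard Marxian rate-of-profit formula (this is textbook notation, not something from the essay):

```latex
r = \frac{s}{c + v}
% s = surplus value, c = constant capital (machines, raw inputs),
% v = variable capital (wages). Heavy automation raises c and shrinks v,
% which pushes r down unless the rate of exploitation s/v rises fast
% enough to compensate: the "more profit from fewer workers" move above.
```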
I am looking up the stuff you're talking about and haven't formed an opinion yet, but my current thinking is: nobody really understands economics, not in a predictive sense. There just isn't a good theory that neatly accounts for everything, and it might not even be possible. And when you throw AI into the mix, I think things get weirder still. With very heavy automation, even if a small cadre of engineers is needed to maintain the system, a capitalist could potentially own literally an entire economy, with no need to trade with anyone or think in terms of money; they desire something and the army of robots does its best to deliver it. A theory that can't analyze such a situation will likewise miss something about whatever situations arise between now and whenever that becomes possible. The point is, whether you find such things plausible or not, the capability gains of AI over the past 10 years have astounded even the craziest of optimists, and there's no sign of slowing down. Even if research came to a screeching halt (implausible), we haven't even reckoned with what's already been created.
capitalists’ rate of profit would plummet and we’d have a world revolution
i know marx said it, but marx didn't actually say this
yea that half of the formula works for me. i just don't see how that automatically leads to revolution, and frankly it sounds like crude stageism.
also, there is a huge gap between explaining a joke and writing one. I don't think we're especially close to AI being able to write good jokes. and there's an even bigger gap between writing a joke and writing an article, writing an article and writing a book, writing a book and being able to choose a topic for the book, and all of that and, yknow... consciousness.
GPT-3 was routinely writing funny jokes. I assume this system does even better at the task. The former fell apart trying to write anything longer than a few paragraphs, though.
Climate change is a much more pressing and important issue.
yes
AI is also still a LONG way off from any semblance of meriting its name.
The singularity and other AI-centric doomposting is pure fantasy born of people taking short-term-but-large-scale improvements in technology and extrapolating that trend out infinitely, against all basic logic about diminishing returns. They did this with Moore's Law for decades before it occurred to everyone that, oh wait, you can't just keep making transistors smaller forever because there's a floor on how small something can be while still interacting with electrons.
AI in its current state is nothing more than some clever algorithms that take in human-generated examples of patterns and figure out trends so as to be mostly correct on future data. GPT and other language-centric algorithms are freaky because it looks like the AI is having a conversation, but in reality it's just a bunch of lines of code aping the patterns of data-mined text conversations, with no consciousness of any kind backing that up. There's no threat of us accidentally developing that consciousness, because we barely understand the nature of consciousness to begin with, let alone are we somehow capable of replicating it in a completely different medium from how it exists in nature.
tl;dr: climate change doomerism >>>>>>>>>>>>>>>>>>>> all other problems >>>>>>>>>>> AI doomerism
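To make the "aping patterns" point above concrete, here's the crudest possible version of the idea: a toy word-level Markov chain that can only ever recombine text it has seen. Modern language models are enormously more capable, but the objection is that they are this, scaled up:

```python
import random
from collections import defaultdict

# Tiny made-up corpus; the chain records which word follows which.
corpus = ("the model predicts the next word . the next word follows "
          "the pattern . the pattern comes from the data .").split()

chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

# Generate by sampling successors: fluent-looking locally, no understanding.
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(chain[word])
    out.append(word)
print(" ".join(out))
```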
I would like to be clear that I'm not ruling out the possibility of strong AI eventually, but certainly not within the lifetime of anyone reading this. It's an enormously difficult problem, and what people are actually doing right now isn't even beginning to tackle it, but rather just coming up with clever algorithms for solving arbitrary problems (which is still very useful and good).
Oh yeah, I'm right there with you. We're already building neuromorphic hardware that operates similarly to a simple brain and doesn't even need much software to coax it into basic pattern recognition. I'm just saying the Chinese Room is the bar a sufficiently advanced AI has to cross. Right now we're at like...tunicate-level brain.
Ok, well the users of thedonald.win will write 16000 words for free, and it will be stuff that’s more likely to resonate with others like them.
:amerikkka-clap:
ai dril already exists, we're all gonna be laid off from the posting factory
But to be real, proof of humanity just means being less lazy than the opposition. The strategies they use are always going to cost them money, so the goal should be to make the venture of mimicking a leftist cost-prohibitive. Like, for example, look at how little the state spends on food banks and community building and helping the homeless. If they have to do that sort of thing to fit in with leftists, then they'd have to chart out a whole cost-benefit analysis of "how many man-hours are we willing to dedicate to finish this infiltration project." Just having an ideology, and that being the beginning and end of it, is way easier to mimic for both robots and human cops.
made an alt for this so any future projects i make can't be tied back to my main account.
i'm by no means the world's leading expert on machine learning algorithms, but i've been diving into the world of machine learning for a while. this is from a conversation i had with some comrades about my predictions regarding the "dangers" of ai:
i don't envision a skynet scenario as the inevitable outcome of machine learning; rather, i see machine learning being used to uphold already existing systemic inequalities. a logistical algorithm sending fewer resources to communities that "produce less" would, in effect, be no different from those communities being unable to afford those resources because their surplus value is being extracted by a large corporation while they're given a meager wage in the process. the biggest impact of slapping these algorithms onto our already fucked-up way of life is further abstraction from the suffering, on the basis of "objectivity". the poor continue to suffer as the rich get to wash their hands more confidently
i've really only given this article a skim, but this is something to worry about far before any of the things this author is talking about
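a toy sketch of that feedback loop (all names and numbers invented for illustration): an allocator that hands out resources in proportion to measured "output" compounds whatever extraction already exists, and its own numbers then read as objective justification.

```python
# Two communities with equal real capacity; B has more of its output
# extracted upstream. Allocating "objectively" by measured output then
# compounds the gap year over year. All figures are made up.
communities = {"A": 100.0, "B": 100.0}   # real productive capacity
extraction = {"A": 0.1, "B": 0.6}        # share of output captured upstream

for year in range(5):
    measured = {c: communities[c] * (1 - extraction[c]) for c in communities}
    total = sum(measured.values())
    for c in communities:
        grant = 100.0 * measured[c] / total           # proportional to "output"
        communities[c] = 0.9 * communities[c] + grant  # capacity decays without investment
    print(year, {c: round(v, 1) for c, v in communities.items()})
```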
Idk, we literally already do that with the managerial ideology of 'the business cycle'. This just does more to reinforce that ideology onto engineers who are looking at China going 'wtf aren't we doing that?'.
I agree. Buying click farms and workshopping influencing strategies is already something the right does.
What we've seen in drone warfare is exactly what you're talking about, the moral disconnect between the war machine and the acts of terror it commits.
To draw another equivalent, it's not like millions of people didn't die before machine guns were invented. The machine gun/drone just lowers the cost of doing something you were already doing. Strategies for countering the new thing already exist, they just need to be repurposed (like trench warfare wasn't a new invention in WW1, what was new was the strategy of "sieging" machine gun posts as though they were on a star fort).
That is a big part of what the author is talking about, I'd be happy to pull some quotes but really you might want to read it. I for one consider this a primary concern. If AI turns out to be really difficult to contain we're doomed no matter what, but if it's possible to use it safely we seem to be bound to an even worse capitalist dystopia, a permanent one where collective action no longer has any power. If our overlords are "generous" we get just enough UBI to survive. Any amount of capitalism whatsoever puts us in danger, the only way out seems to be to tear it out completely and start over, reform will not be enough. Again, there may not be much time!
Capitalism is a rogue AI that runs really slowly, using people and corporations as its medium.
Thus we can already observe a paperclip-maximizing AI destroying all life on earth.
However, the fact that we already have one, know about it, and refuse to stop it has set our future course. Communism is the only thing that can stop a rogue AI. Any Reddit bro that doesn't acknowledge this fact is not worth talking to.
https://marvel.fandom.com/wiki/Hexus_(Earth-N)
Grant Morrison understood this
Alright, I read the whole thing and mostly I'm just psyched at the prospect of condescending white-collar douches losing their jobs after years of these c*nts pulling the "fuck you, I got mine" card on all the working poor.
Anyway, I'm choosing to take these ideas optimistically. I think white-collar assholes having their jobs automated, and the subsequent move to blue-collar work, will potentially force them into greater solidarity with the working poor, and likewise force them to be more sympathetic to socialist ideas.
The article itself acknowledges that a lot of manual labor and other blue-collar work will still need doing by people for the foreseeable future, so there's still leverage that working people can exert. I figure if we still have leverage, and we have a large influx of recently proletarianized people -- maybe with a dash of convert's zeal here and there -- which specifically comes from a decline in the "middle class" and the people who have historically churned out the ideological justifications for capitalism and class hierarchy, we might have a recipe for some actual movement against capitalism itself, rather than just a band-aid here or there.
I spent $10 million on a supercomputer and all I got was this crummy black_mold_futures account.
incredibly good essay. really clarifies the contours of the battlefield.
Yeah, it's really good. I'm not sure the objectors in this thread actually read the article, or even a tiny bit of it. The author isn't talking about singularity doomerism or whatever; the author is talking about how currently existing tech could lead to massive job loss in many rote white-collar jobs. I wonder if the prevalence of humanities backgrounds among leftists makes them dismiss this issue (which is kinda weird considering they "trust the science" on climate change).
As someone guilty of not fully reading the article before commenting, let me respond now that I've sat down and read through it, with a real-world example: GitHub's Copilot.
When this was first announced, a wave of doomerism washed over the tech world as people debated whether it would put the average software developer out of a job once they'd ironed out the kinks and gotten it into a fully autonomous mode. Arguments over this turned out to be stupid, because the thing can ultimately only do one thing: pull code from GitHub that kinda does what you're trying to do and drop it into place. As someone who's used it, it's literally more trouble than it's worth even as a code-suggestion plugin, because it rarely matches up like you'd want and you end up rewriting a bunch of the suggested code anyway.
One could argue this is just one of those kinks to be ironed out before a full-blown replace-all-the-workers movement kicks in, but that's still missing the point. It can fundamentally only produce code given code that already exists somewhere it can read. That makes it a decent replacement for a coder looking something up on StackOverflow because they forgot jQuery again, but that's not what coders are paid to do. The fundamental job of a programmer is to translate real-world needs (usually "business logic," as it's called in industry) into code, and those needs are overwhelmingly novel and original in some way or form. The same is true for a journalist or any other "creative" white-collar job. And while there are tools out there that attempt to automate this, they're overwhelmingly bad (just go try a no-code platform and see how well it holds up under real-world use, or read an autogenerated tech-comparison article and see how long it takes to realize nothing of value is being said).
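As a concrete (entirely made-up) version of that distinction: a completion model can reliably produce the first function below, because thousands of near-copies of it exist in public repos, while the second encodes an invented business rule that exists in no code corpus anywhere:

```python
import json

def read_json(path):
    # Ubiquitous boilerplate: Copilot-style tools autocomplete this fine,
    # because it appears, nearly verbatim, all over GitHub.
    with open(path) as f:
        return json.load(f)

def shipping_discount(order_total, is_member, warehouse_region):
    # Hypothetical business logic, invented for this example. Rules like
    # this live in meetings and managers' heads, not in any training set,
    # so pattern-matching on existing code can't write them for you.
    if warehouse_region == "EU" and order_total > 120:
        return 0.15 if is_member else 0.05
    return 0.10 if is_member else 0.0
```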
The issue with automating this sort of creative work is the problem all AI suffers from: aping existing work rather than creating anything new. AI and automation are really good at repetitive, noncreative work, like manufacturing the same circuit board a hundred thousand times, because a human can come program it to do that one thing really well and then leave it be. Putting automation towards creative work inevitably outputs something that can only be described as polished-but-vapid, like a term paper written by someone who didn't pay any attention and is just scrambling to meet their word count.
And this issue is something that can really only be addressed meaningfully by artificial general intelligence, because creativity is something you simply can't code into an algorithm. The closest you can get is some degree of randomization, but that falls back on the vapid results again.
I think it will put pressure on salaries. AI automation tools may not fully replace coders, but they'll definitely let the work that used to take two frontends, a UX designer, and two backends be done by one frontend and one backend, or maybe just a single fullstack. So regardless of whether AI can fully replace programmers, my thinking is that it doesn't need to in order to hurt job availability in the short to medium term. Who knows, maybe what happens instead is a lot of tiny startups, which also sounds hellish in some regards. I'm also wary of what sorts of problems may turn out to be generalizable, or whether I'm vastly overestimating the number of edge cases that can't be automated. FYI, generative models can already produce UI code from an English prompt, and it's a matter of time until they can generate finished CRUD applications, which would cover most IT work; but I don't see them adjusting to existing business idiosyncrasies without some form of human input. Still very bad!
Yeah it certainly might reduce the number of tech people, especially in frontend stuff where a manager could conceivably come up with a general design and have an AI spit something similar out in short order, but it falls into the same category as traditional automation in that it might replace a bunch of jobs but paradoxically create a bunch of new ones to work around the automation for QA and such.
The author isn’t talking about singularity doomerism or whatever.
Replacing programmers already sort of implies some degree of singularity. I can see AI assistants putting pressure on some programmers and probably tightening budgets a lot, but fundamentally replacing what programmers do puts us real close to self-developing AIs, and at that point, yeah, you have to enter this conversation of singularity or whatever.
Replacing programmers with AI, while spooky as heck, is still not enough to start zooming off into a technological singularity. That requires that AIs are specifically replacing AI programmers, and doing at least as good of a job at it as the handful of AI researchers that push the field forward.
That learning AI is unlikely to be able to explain the appeal of Cumtown. Sure it can do basic jokes, but like, idk if it really understands irony.
Good post. Tricky to respond to as well - I think very few people beyond AI researchers themselves have a good idea of where the field is and what even the short term future looks like. Can a historical materialist perspective offer solutions without the AI equivalent of the Paris Commune?
Also, this website is weirdly reactionary sometimes. Most of the conclusions in the article are not so wildly farfetched to justify being dismissed outright.
It's easy to dismiss the article's conclusions when you don't bother to read it.
oh my god, giving an ai a prompt and it making like a 3d model is a dream come true... maybe these are all flat but im still agape at this.
give me the reins of DALL-E 2 and a research grant and i will become the first person to get an art degree without actually creating a single work
The art aspect has become more serious recently; here are a couple of examples that I think qualify. Oh, also, the 3D model thing is definitely happening. I mean, it already happened a few years ago, but the tech is so much better now; there's no reason whatsoever the DALL-E 2 approach couldn't be adapted for 3D meshes.
(totally AI generated) https://www.youtube.com/watch?v=-8hwdJKWPnk
(overlay on an existing music video) https://www.youtube.com/watch?v=xwRgvtL2BZQ
I've seen some demos of text prompt to 3D model, it's coming
janky lumpy low poly models ofc
Really interesting article! Looking at the title, I was expecting something more Luddite. I'll give my two cents on what I think are the main points.
So, in regards to the propaganda thing, I think the author is on the money about the prediction but off about the severity of the consequences. The prediction is correct because, well, look at the DALL-E 2 page. They might say "our content policy does not allow users to generate violent, adult or political content [...]", but come on, it's OpenAI :melon-musk:. On the other hand, how much would change? The media landscape is already saturated with loathsome propagandists, and the internet is an astroturfed hellhole. Not to mention the decades of cold war terror that created the political landscape of today. And yet leftist ideas persist and even flourish depending on circumstances; it's almost like we're right and that resonates with people's experiences or something :thinkin-lenin:. No one is immune to propaganda, but no one is a complacent meat drone either; the CIA learned that in the 60s.
The mass unemployment thing, though, is a lot more interesting and potentially terrible. I can definitely see the ruling class, seeing themselves untethered from labor, trying to marginalize or even exterminate larger sections of the populace. The question there would be: how much of a section? If too much, I don't think they can produce drones quickly enough, or hire enough pigs, to prevent what's coming to them :mao-aggro-shining:. IMO they would have to buy off a lot more people than the author suggests, which is already a reality regardless of AI.
I liked the article a lot, though, and think this kind of discussion is necessary if we don't want to be blindsided by a world that's in constant change. I especially like the part about reaching out to AI researchers. Most really mean well, but from my (very) limited academic experience, computer science people in general would really benefit from some political education (pls, I don't want to hear more hare-brained :LIB: shit like "automated fact-checking" at conferences).
Some additional points I think would be interesting to discuss: the author kind of glossed over China, which is also investing heavily in AI; what would an explosion of AI use mean over there? Another: what if our assumption that these models can only be trained with massive amounts of data and computing power is wrong, like if, for example, research into language models for low-resource languages bears fruit?
It's hard to say how close we are to the theoretical limit for these low-prior models, which make virtually no assumptions about the data; the transformer was a big leap forward in efficiency, so further improvement isn't out of the question. But if you want a machine that just learns human languages and that's literally it, obviously there's room for improvement. Like, GPT-3 was designed for language, but it can just as well learn to generate images or audio or whatever you throw at it, as long as you encode that data as a heap of tokens.

We already know these models transfer what they've learned about one language to another. For example, if you only have a few hundred pages of Mandarin, the models will do very poorly, but add a few terabytes of English to the training data and they'll learn the Chinese much, much better. As far as general-purpose learning is concerned, there are impressive examples of few-shot learning in a lot of the big language model research, and of course AlphaZero used no training data at all to become superhuman at Go; or, put another way, it generated its own training data by playing millions of games against itself. So the idea that AI is merely parroting by detecting patterns in mountains of human-generated data is kind of dead, and I'd expect that to become much more obvious rather soon.

As for compute, I wouldn't expect you to be able to train anything with the capability of these large transformers on a laptop any time soon, if ever, but they can already run on your laptop (slowly), and the few-shot learning capabilities they've picked up will of course be carried along, so you might be able to run software that can learn a new skill or even an entire language.
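For what "encode that data as a heap of tokens" means in practice, here's the crudest possible sketch; real systems learn a tokenizer (e.g. discrete VQ codes for images and audio) rather than naively quantizing like this:

```python
# Flatten a toy 2x4 grayscale "image" into a sequence of discrete token
# ids from a 16-symbol vocabulary. Once data looks like this, the same
# sequence model that handles text can be trained on it.
image = [
    [0, 32, 64, 96],
    [128, 160, 192, 224],
]  # pixel values 0-255, all made up

tokens = [pixel // 16 for row in image for pixel in row]
print(tokens)  # [0, 2, 4, 6, 8, 10, 12, 14]
```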