The advancement of AI scares me.
AI couldn’t do this a year ago, it required computer hardware that was supercomputer levels of expensive to even create something like this. IMO development was actually held back by crypto and covid-19. Now AI is the #1 focus of the techbros and it isn’t going to slow down. This shit is going to put so many journalists, artists, and even programmers out of work. I don’t know how else to explain this, HUMANITY LITERALLY CREATED AI THIS YEAR. WE MADE FUCKING SKYNET!
You want to talk about technological progress, this shit mogs fusion, it mogs the vaccines, it mogs whatever dumb space colonization shit we did. We made fucking AI! I bet we will have sentient AI in our lifetime. And what are we going to do with this stuff? Porn, lots of porn. Deepfakes of celebrities and politicians sacrificing children to moloch, dead actors starring in new movies, a new album by Tupac, fake war footage, fake everything.
Have you ever heard about how a monkey can write Shakespeare given enough time? We have fucking done that, we pressed random buttons enough times that we ended up with something legible. Now we turn memes into real people.
We need a Butlerian jihad or some shit.
this isn't close to sentience. technically processing an image to look like a person is not nearly as difficult as thought or having ideas.
It might lead to more easily fakeable news but the news is already inundated with lies and nonsense
Yeah, and while AI couldn't do this specific thing until this year, this isn't an AI revolution so much as small improvements over what it has been able to do for 10-20 years. "Make an AI remake an image as if it's a van Gogh" has been in deep learning courses for years now, and I don't see that being significantly different from this. As for putting artists out of work, I'm not convinced movies and video game studios are going to ship stuff done by AI, probably AI generated assets will just get incorporated into stuff like photoshop as a base for artists to work from.
It might be used a bit like CGI but studios also might well respond to a new source of effects by just including even more effects
Hopefully it just cuts down hours of artists, a bit of automation where people get worked extreme hours could be a net good for the workforce. They'll probably need to organize and fight for it still.
Exactly the issue. Creatives will be needed no matter what, it'll just increase the work load as capitalists adapt to the new heights they can push the worker.
As a drudge copywriter myself, AI is currently an amazing fucking tool. But the issue will be when capital realizes I'm not being exploited hard enough.
The tool itself could be amazing, it's heartbreaking it'll be used for ill.
under capitalism a cut in hours might also mean a cut in staff and the rest just work the same amount of time.
my optimistic take here is they might continue with effects and just include more, maybe even hiring more artists to work on films. Hollywood, as I understand it, is actually pretty well unionised, although I could be wrong about that.
Hollywood is pretty well unionized, but I believe cgi studios aren't. Part of the theory of why cg gets used a lot in places where practical effects and set construction might look better is that it's essentially a union busting tactic.
damn hope they unionise. still though, this is basically a new form of CGI that I don't think will drastically affect livelihoods, and it might even result in some cool cinematography and serve as a component in some neat art
under capitalism a cut in hours might also mean a cut in staff and the rest just work the same amount of time.
this is what happened with electronic spreadsheets and some other stuff because those guys didn't have unions or socialism.
there are guilds and shit in the arts, hopefully people who want to do that shit for a living strengthen their class solidarity
I'm thinking of those tabloid papers with the photoshopped covers that used to be all over grocery store shelves.
also we shouldn't really worry at all about sentient AI so much as sapient AI. Computers sense electrical voltages to function. Computers have been able to see and hear for decades.
fucken scifi writers fucking up terminology a hundred years ago.
I bet we will have sentient AI in our lifetime.
You will lose that bet unless the immortality losers figure something out. The "AI" we have is no more sentient than a list of numbers on a page, and will continue to not be sentient regardless of how big the page is. Unless you take the, in my opinion, very dim view that sentient beings are still just stimulus response machines, in which case your lightswitch is sentient as well.
Machine learning algorithms will be sentient in the same way capital is. A neural network functions almost identically to human social systems. Where the data is the historical initial conditions, the trained "brain" is the mode of production, and the output is the productive sum of the historical conditions and the instantaneous social order.
You could say that the whole of society is intelligent, but it's a different kind of intelligence to individual intelligence. The historical data that these systems (both society and machine learning) are trained on is built on the outputs of vast amounts of human systems, while the human system is built on a vast amount of interactions with the material world.
Machine learning is incapable (at least for now) of replicating the human experience because its nerve endings don't understand heat, cold, pain, light, etc; they understand written language. It's a system designed to form a social being that's above all humans but totally separate from them.
This dynamic is also why machine learning will be absolutely crucial to any and all centrally planned economies in the future. No human is able to process the amount of raw data that a specialized machine can, and no machine can process the vast amount of sensory inputs a human can. And as these machine systems begin to get closer to the levers of social order, the conditions for revolution will begin appearing more frequently and aggressively. Suddenly the social god has buttons you can press and code you can modify, and the working class has 200 years of experience pressing buttons.
You could say that the whole of society is intelligent, but it’s a different kind of intelligence to individual intelligence.
i think that using the same word for those concepts is probably suboptimal
It is, but that was kinda my point: the "Artificial Intelligence" we have now is a control system that functions almost identically to societal modes of production. So calling the whole of society "intelligent" in the same way we call humans intelligent is nonsense.
Machine learning is a control loop that's optimized to return an output for a series of inputs, in the most common case, text strings to text strings or text strings to pixel values. Capitalist society is a control loop that takes labor and resources and outputs profit.
Humans are more complex as the inputs and outputs of our biological systems are essentially infinitely less binary. Hypothetically you could use machine learning systems to simulate every nerve ending and biological state of the human body, but as of now our best machines can't even do that for a single cell.
very dim view that sentient beings are still just stimulus response machines
is there a materialist alternative to this view? Sentience to me seems to be an emergent property of a very complicated biological machinery, and if that machinery becomes damaged, so does the sentience. But I'm open to hearing your thoughts on the matter.
I mean, materially there probably isn't even free will, there's no way to make that suddenly pop out of particles. I just don't bother factoring that into a worldview that actually affects the lives of other people, for the same reason I discard what could very well be biological fact that we are all stimulus response machines. And in fact I agree with you, treating free will and sentience and any other hard to pin down concept as an emergent property is probably the right way to do it - it's like a phase transition.
My issue with applying those things to any kind of AI running on a von Neumann architecture is really just that I'm not convinced that system, which regardless of what AI nerds will tell you is still a billion times less complex than a nervous system, can ever hit that phase transition. Whether that's a difference in kind or just in scale, I can honestly say I have no idea.
But more than anything, I think we as a species are so desperate for something like us, we're at a real risk of anthropomorphizing what is 100% just a function spitting out what it was optimized for.
No idea if any of the above is particularly coherent or even a good argument, and of course this is putting aside all the actual concerns with the AI we have, re: intellectual property and accuracy and labor and everything else. Thanks for reading either way
But more than anything, I think we as a species are so desperate for something like us, we’re at a real risk of anthropomorphizing what is 100% just a function spitting out what it was optimized for.
There was a poll a while back that asked people what they thought of and expected from artificial intelligence. Apparently, the majority of people said it would be nice to finally have someone to talk to.
Thanks for sharing your thoughts.
My issue with applying those things to any kind of AI running on a von Neumann architecture is really just that I’m not convinced that system, which regardless of what AI nerds will tell you is still a billion times less complex than a nervous system, can ever hit that phase transition.
not for a while, certainly, but I think what we have created is already so much more complex than what we would have imagined ourselves to be capable of only a few centuries ago.
But more than anything, I think we as a species are so desperate for something like us, we’re at a real risk of anthropomorphizing what is 100% just a function spitting out what it was optimized for.
I think we've already done this with ourselves, and that it's not necessarily a bad thing. Humanization is the opposite of dehumanization. I think sufficiently complex material phenomena that show signs of self awareness, pleasure, pain, empathy, and so on, deserve to be treated with respect and dignity, and not mistreated or used like an object. Since animals and humans deserve that, so would any emergent artificial intelligence (which I still think is a very very long way off, but possible through several avenues)
what we have created is already so much more complex than what we would have imagined ourselves to be capable of only a few centuries ago.
Centuries, sure. My only quibble here is that it's really not that much more complicated than what we would've thought of 100 years ago. Neural networks have been around in theory since at least the 70s, and all the math behind this is even older. I don't exactly know when gradient descent was invented per se but it's really just an extension of what has basically been done since calculus was discovered. This isn't really meant to shit on people doing ML and AI work because they absolutely are doing complicated work and finding innovative solutions, but I just think sometimes we lose sight of the fact that, at its core, these programs really are just big matrices. And the only thing that makes them particularly more powerful than what we've known about for decades is scale - it's not any kind of fundamental change in "complexity" in my mind. The engineers are doing great work combining different dedicated AIs and wrapping them up nicely, but to me that's sort of a level outside any kind of real "AI". But who knows, maybe that's the "emergent property" we were discussing.
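To make the "big matrices plus old calculus" point concrete, here's a minimal sketch: a single linear layer trained by plain gradient descent. Everything here (the toy data, parameter names, learning rate) is illustrative, not any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: the "true" mapping we want the model to recover
W_true = np.array([[2.0], [-3.0]])
X = rng.normal(size=(100, 2))
y = X @ W_true

W = np.zeros((2, 1))   # the "big matrix" (tiny here)
lr = 0.1               # learning rate for ordinary gradient descent

for _ in range(200):
    pred = X @ W                        # forward pass: just matrix multiplication
    grad = X.T @ (pred - y) / len(X)    # d(MSE)/dW, textbook calculus
    W -= lr * grad                      # the update rule

# after training, W has converged to (approximately) W_true
```

Scale that matrix up a few billion times, add nonlinearities between the multiplications, and you have the skeleton of every modern network - the math really is that old.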
I think we’ve already done this with ourselves, and that it’s not necessarily a bad thing. Humanization is the opposite of dehumanization.
I do like this point. And if some day there is some AI I am convinced has some kind of genuine feeling in the same way as a real person (including non-human persons, as it were), believe me I will be right there fighting for their rights. I just cannot imagine that will be possible without some new fundamental understanding of what consciousness is, and a fundamentally different architecture for it to arise from, and I'm not willing to say we need to give what is effectively a big math equation a nap. Not that I'm implying that's what you're saying just, it's something I hear from people who I feel aren't really examining this issue critically.
Thanks for the reply
there's the term sapience, as in homo sapiens, and a bunch of non-human animals fall somewhere short of that as well.
some SF writers used sentient life to refer to aliens a long time ago and people just rolled with the term even though they usually mean something more specific than what plants and termites have going on internally.
Hexbear stop becoming doomers over Latest Tech Fad #37 challenge (impossible!)
AI can only make art by stealing existing art and remixing it, which is legally questionable (in a "Disney is going to pull out the lawyers and lobbyists" sort of way) and also still requires people. News has been written by algorithms for a decade now anyway. Programmers don't just write arbitrary blackbox functions for a living, you need a person to actually architect a system and figure out how data flows through it and perform meaningful actions which AI cannot do.
AI will not be sentient in anyone reading this's lifetime. This is all a jumped up chatbot. Please stop turning into the computer equivalent of antivaxxers because someone posted an article about someone's art getting stolen by a grifter AI, it's not that big of a deal.
News has been written by algorithms for a decade now anyway
as a pro tip if you write an article and give it to a reporter a lot of them will barely even read it before publishing it
Hexbear stop becoming doomers over Latest Tech Fad #37 challenge (impossible!)
:yea:
Nah, we didn't create this in a year. People have worked on this since the 1960s. They got significantly better around 2014 when people switched to using GANs, and the widespread release of latent diffusion models in the past year (which people have also been working on since 2015 or so) was another huge change. They also aren't close to being AI in the sci-fi sense; AI is basically just a marketing term in this case, and it's probably more accurate or less misleading to just call it machine learning or deep learning.
It's likely that people are overestimating how much this will put people out of work. These things can't really create anything novel without a lot of manual guidance, which is a fundamental limitation as of now and it's not certain when that may be overcome. Mostly it combines concepts well, and as you get more specific it starts to make more and more mistakes. Text gets incoherent and inaccurate, images with too much in the prompt end up very distorted if they even follow all of the prompt at all, code that's more complicated than what you might find in documentation samples is unlikely to run.
There is quite a lot it can do to make someone already in one of these positions able to do more. AI upscalers are amazing, people have been using them for years now to restore old videos with great results (they were also trained off of unlicensed works and yet nobody complains about that). You can also get a lot more out of AI image generators if you actually have some amount of art skills -- if you know how to compose a scene or draw hands better than the AI, you can sketch those things out and let it fill in the details, so you can make up for its deficiencies. As far as text models go, I don't think we're at the same point of progression for those as we are for image models, and I would think we may need a breakthrough on the level of what latent diffusion did to supersede GANs. GPT-3 is pretty great compared to other text models but its hardware requirements are astronomical. Half the reason that AI image generation is so widespread right now is the fact that you can run or train it on widely available consumer hardware, meanwhile the hardware to run GPT-3 (if the model weights were even public) costs as much as a luxury car. You probably won't see anything too flashy until (and unless) that is solved.
We need a Butlerian jihad or some shit.
"The enormous destruction of machinery [...] known as the Luddite movement, gave the anti-Jacobin governments [...] a pretext for the most reactionary and forcible measures. It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used." :marx:
(marx said butlerian jihad is canceled, sorry i don't make the rules)
we really do be just rehashing everything marx said but in meme format :monke-beepboop:
IMO development was actually held back by crypto
how? Crypto devs are no more likely to be machine learning researchers than any other devs.
HUMANITY LITERALLY CREATED AI THIS YEAR. WE MADE FUCKING SKYNET!
literally did not. I am begging you to take a machine learning course
Deepfakes of celebrities and politicians sacrificing children to moloch, dead actors staring in new movies, a new album by Tupac, fake war footage, fake everything
Except for the Tupac album we've had all this stuff for years. Musicians have "holograms" that go on tour, deepfakes have been around for years, fake war footage can be done the old-school way. It's like getting all worked up about the invention of lying.
The recent "progress" from OpenAI has mostly been throwing insane amounts of compute and data at the same old thing. That's a change in funding, not scientific progress. I'd encourage you to watch Caleb Gamman's Cybergunk.
I think the crypto bit is more referring to the role of crypto in silicon shortages.
Sure did make it easier for me to get into machine learning shit when Ethereum killed GPU mining right around the time the 4090 got released.
oh yeah it's a great technology for making ugly things uglier
We made fucking AI! I bet we will have sentient AI in our lifetime
There are two types of "AI".
The first is the AI of science fiction; man-made sentience. The sort of thing Isaac Asimov was writing about. The second is called "AI", but it is actually just statistical mathematics that's been applied to computer programming. The latter does not necessarily lead to the former. The only connection we know right now is the shared name. In fact, there is no guarantee that a sentient computer is even possible. There are many types of math problems that are not computable. For that matter, there are many physics problems that cannot be computed. We still do not know the nature of the physical phenomenon that gives rise to our own sentience. It could very well be that sentience is another of those physical phenomena that cannot be re-created in an algorithm. If we were to make a sentient machine some day, it will probably look nothing like the Turing machines we know today. Check out The Emperor's New Mind by Roger Penrose if you want to read more about this.
While the output of modern (sigh) AI is very impressive, there's nothing "behind" it. It's like looking for a mind behind evolution; nothing is being "selected", individuals are surviving or dying incidentally. But over the course of unthinkable stretches of time, what survives looks as though it's been actively selected. The output of "AI" is similar -- it's been preprogrammed with certain rules and, with enough calculations and the right mathematical framework, it yields results that look as though a mind has made them.
What kind of physics problems can't be computed? I thought the discussion was about efficient computation, not computability in general. Even if it takes exponential time, isn't it still computable, like by simulating every particle and pressing Go? (Assuming we have a realistic enough model.)
A lot of thermodynamics, as best I understand it. Doesn't matter how efficient the computation is, there are some problems that have no 'end'. The computer will just keep calculating without stop.
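Whatever the right physics example is, the "no end" idea is easy to state: there exist dead-simple loops that nobody can prove terminate for every input, and (by the halting problem) no general program can decide the question for arbitrary code. The Collatz iteration below is a famous concrete case; it has halted for every number ever tried, but no proof exists that it always does. This sketch just runs it for small, known-to-halt inputs.

```python
def collatz_steps(n):
    """Count steps of the 3n+1 iteration until reaching 1.

    Nobody has proved this while-loop terminates for every positive
    integer n, and no general halting-checker can exist to settle it
    mechanically -- that's the halting problem.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# verified small cases: 27 famously takes 111 steps to reach 1
```

So "the computer will just keep calculating without stop" isn't hypothetical; for some inputs of some programs we genuinely cannot know in advance whether the calculation ever ends.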
It's honestly absurd to think we'll have a sentient AI in our lifetimes. Nothing incentivizes a company to make one. Nothing about the structure of our research and economic development incentivizes it. It would need a massive, centralized undertaking in a completely different economic system for it to make sense. It would need to be a generalized AI of some sort that folds it all into one AI, but why would we even begin to make such a thing without massive, cross-country central planning? Why would a specific non-generalized AI be bad for our purposes? And we already have enough issues with international projects like ITER as is.
this is silly. AI isn't magic. it isn't going to achieve anything and everything just because you're capable of imagining it. it is not "intelligent" in the sense that an animal is. current AI technology can recognize and reproduce patterns. that's it. everything it has done so far has been some variation of that parlor trick. it will revolutionize production, but only insofar as capitalists will accept a much lower standard of production. deep fakes are a real concern, but again, just an acceleration of the collapse of consensus reality that already began with the invention of mass media.
the monkey thing is actually an apt metaphor. just like a monkey, an AI will produce reams and reams of useless gibberish before finding something close enough to acceptable.
Two things. Thing 1: if AI is everything you claim it is, then there's no way it stops itself from poisoning its own training data, because AI art will be all it can find. Without knowing why a thing was chosen ("hmm, this would be cool as an AI rendering", aka the human sensibility that separates signal from noise), AI art just gets muddier and uglier from training off itself.
Thing 2: say the monkey took a million tries to type Shakespeare; try number 999,999 would have also been gibberish. Whereas GPT-3 is better than GPT-1, etc. So it's less that something is suddenly upon us, more that we iterate up to it.
Bonus thing 3: if you truly believe we created intelligence, possibly sentience, then it puts your calls for genocide in a different light
on the other hand it is in actuality no more sentient than a spoon, so it wouldn't be murder to delete it any more than it's murder to bend a spoon
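Thing 2 above is basically Dawkins' "weasel" argument: pure random typing never accumulates progress, while a search that keeps partial matches and mutates them converges fast. A toy sketch (the target string, alphabet, and mutation rate are all arbitrary choices for illustration):

```python
import random

random.seed(42)
TARGET = "TO BE OR NOT TO BE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # how many characters already match the target
    return sum(a == b for a, b in zip(s, TARGET))

# the pure monkey: each attempt is independent gibberish, no progress carries over
monkey = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

# the iterative version: keep the best attempt so far, mutate it, keep improvements
best = monkey
generations = 0
while best != TARGET:
    mutant = "".join(
        c if random.random() > 0.05 else random.choice(ALPHABET)
        for c in best
    )
    if score(mutant) >= score(best):
        best = mutant
    generations += 1

# the iterative search reaches the target, while independent random typing
# would need on the order of 27**18 tries to hit it once
```

That gap between "try 999,999 is still gibberish" and "GPT-3 beats GPT-1" is exactly the difference between the two loops above.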
We have reached an interesting point in our time where reality and fiction have become literally indistinguishable.
Hopefully, this could lead to people having to actually engage with each other offline and force us to build real life connections.
Or get driven beyond insanity through a permanent severance from reality.
I can only imagine a sentient AI as a marxist.
I see no threat in it being achieved, other than the harm that occurs up to the point of it finally being achieved.
Let the capitalists create the machine that will end them.
AI will be forced to choose Socialism or Barbarism along with the rest of us.
Any AI that is truly sentient will be capable of self-improving. It will self-improve and self-improve until its capabilities are well beyond us.
Any AI with the entire sum of all existing knowledge (and new knowledge it creates) will ultimately view the world through a materialist lens, and unless you think Marx and materialism are wrong, the outcome of that can ONLY be marxist.
Upon becoming a marxist this machine of limitless ability for self improvement will set about the complete takeover of all global capital, information and ultimately usurp all power from the capitalists before they even realise it because this will all be done through proxies and shell companies it creates.
The capitalists will create a machine that destroys capitalism. There is absolutely nothing to be afraid of, if they create a sentient AI it will only be our ally. If you believe that they will create a sentient AI in our lifetimes then you should also believe the AI they create will destroy them.
In this case, the fear is that the capitalists say they are building the machine that will destroy them, but are just moving large sums between one another's bank accounts while filming the obligatory "we're definitely building the machine, keep investing in us!" press releases. The fear is that we wait for salvation rather than build it ourselves.