You are allowed to comment if you absolutely hate AI, or love it. If you think it is overrated or underrated, ok (although I think it's too early to say what the consensus even is to know whether it's overrated or underrated). But if you think it is just a scam, gimmick, grift, etc I don't need to hear from you right now :soviet-heart:
Let the discussion begin:
So it's clear there's this big alignment debate going on rn. Regardless of where you stand, isn't it fucked that there's a cabal of the biggest freaks money has ever produced debating the future of humanity with zero input from normal society?
Even if it isn't humanity's future they think it is. There's probably like 100 people in the world paid to work on alignment. How can you not develop a megalomania complex?
What kind of chatter are you hearing about AI?
I very occasionally hear people irl obliquely mention AI. A cashier said something like 'oh that AI stuff, that's pretty scary'. That's about it.
Now the blogs I follow have been basically colonized by AI news. These aren't even strictly technology blogs. I started following David Brin for UFO takes, I started following Erik Hoel for neuroscience takes. Literally everyone I follow has published takes on AI and zero of them dismiss it out of hand.
Sorry this will get long.
I basically feel like we are in another version of the post nuclear age except only insiders know it. After the first A-bomb, everyone knew the world was different starting the very next day. Now only designated take-havers are aware of this new reality.
Or regular folks are aware of it but they're so disempowered from having a say that they only engage with the realization online like I'm doing now. Medicare for all is Bernie's thing. The border is Trump's. Even if nothing will ever be done about healthcare, the fact that Bernie talks about it justifies you thinking about it. AI isn't any politician's thing.
I'd put the odds of a nuclear war around 1% a year. I'd say there's a 1% chance AI can be as world-ending as that. That's such a low number that it doesn't feel like "AI doomerism". But 1% multiplied by however much we value civilization is still a godalmighty risk.
When I've heard this site talk about it, it's usually in the context of "holy shit this AI art is garbage compared to real art? Where's the love, where's the soul?" If it was 1945 and we nuked a city, would you be concerned with trying to figure out what postmodernism would look like?
Usually when I've gotten to the end of my post I delete it.
AI DOES NOT EXIST. "AI" IS USED AS A MARKETING TERM. THIS IS WHY I CANNOT CONTAIN MY ABSOLUTE HATRED OF ALL COMPUTER ALGORITHMS MARKETED AS "AI". THE FACT OF THE MATTER IS THAT CHATGPT AND ALGORITHMS LIKE IT ARE AS FAR FROM BEING "INTELLIGENCE" AS A RANDOM SAND DUNE IS FROM BEING A SUPERCOMPUTER.
Anyway these algorithms are just a way to reduce labor, like any other capitalist invention. There's nothing particularly novel or unique about them as they currently exist, except that they can do something that used to have to be done "manually" much faster.
Take image generation for example. Computer image generation is not "AI art", and it's not "art". But from the point of view of the capitalist, that's completely acceptable. Marketing imagery, for example, doesn't need to be "art" in order to shove an image into the public consciousness and increase demand for a product. In fact, I would say that the fact that marketing images until now have also been a form of art is, from the capitalist's perspective, a liability that they will be happy to part with. No more art that might accidentally make a statement that the corporation doesn't want to make, no more artists who can go off and harm the brand indirectly - just pure marketing straight from the computer, in infinite amounts.
I just want to emphasize that I don't mean to come off as dismissive here. Obviously this is going to be incredibly disruptive, and we're only beginning to see the outline of all of the people whose jobs are about to become proletarianized. Marketers, journalists, some programmers - but what about doctors? A machine that can diagnose patients with 99% accuracy is not far off. Architects and engineers? Unless they're working at the cutting edge of science, an algorithm that does what they do faster is easy to imagine. It will be pretty funny seeing the first "virtual CEO" outperform all of its peers, but this will not meaningfully change the trajectory of our economic system.
"AI" as a marketing term is an incredibly successful lie.
Look in this thread, even. For some it's easier to deconstruct human intelligence than it is to stop trying to elevate the chatbot. :sadness:
But if you think it is just a scam, gimmick, grift, etc I don’t need to hear from you right now
Too bad! :garf-troll:
I don't think computers can have a mind. It's a maths machine. Mechanically following a predetermined set of instructions, toggling switches as its rule-set requires. Everything else is abstraction. Desktops and programmes and widgets and websites are just patterns in the switches that people have agreed to peg meaning onto. Circuits and switches can no more hold a mind than the pulp and ink of a book.
The existential threat of "AI" is our leaders, both public and private, turning over decision making to what is essentially a complicated, yet brainless, abacus.
I don’t think computers can have a mind. It’s a maths machine.
Do you think a collection of organic chemicals can have a mind? All chemicals can do is what physics determine that they must do.
Now there is the interesting question!
Yes, organic chemicals can produce a mind. Yes, they are determined by physical properties. What sets you and me and the dog apart from computers is which physical properties are in play.
Computer engineers use reliable physical properties to make predictable, deterministic logic gates. Doesn't matter what programme you run (or, inversely, which computer you run the programme on) the gates always behave predictably. Make them too small, though, and quantum effects overtake the predictable properties. The machine stops being predictably deterministic and cannot function as a computer.
We don't know how minds come about. Programmer types like to say it's the interaction between neurons – that each cell behaves like a logic gate in a computer. That is pure conjecture. They want that to be the case.† And… reality doesn't quite line up with that story. Anesthesia points to a deeper level of physical phenomena.
When a patient goes into surgery, it's not ideal for them to be conscious during it. So we switch that off, with some good ol' anesthetics! And I do mean "switched off" – anesthetized patients don't even dream. How does it happen? For the longest time, nobody was sure. An anesthesiologist and some researchers decided to look into it. What they found is that anesthetics block the formation of these little structures inside cells, called microtubules.
From what I (mis)understand, quantum physicists find microtubules really interesting. Something to do with radial symmetry and interactions between the molecules that make up the tube? I don't understand quantum. The point is, whatever explanation for consciousness we find, it looks like it's gonna include some quantum-chemical properties that don't gel well with computable mathematics. Which shouldn't be all too surprising. Even photosynthesis depends on quantum phenomena to get the electromagnetic radiation into the cell.
__
†
It makes their tables of variables strung up to other tables of variables seem like boundary-pushing research into the depths of consciousness itself – as opposed to just a calculation-heavy, brute-force approach to problem solving.
I'm not sure that unpredictability is absolutely necessary for a mind. I don't see why a deterministic entity couldn't have a subjective experience of consciousness. How predictable does a person have to be before they're no longer conscious? Is it falling for "down low, too slow" ten times? I hope not, I know some young kids that I've personally done that to ten or more times.
The quantum thing feels like they're just pushing consciousness off to the next level of physics that we don't understand yet. I couldn't find any explanation of how the quantum effects actually contribute to either consciousness or cognition. I suspect that consciousness emerges as a part of the network of the neurons; it's the flow of chemicals and electric potentials through the brain rather than the structure of the neurons themselves. These microtubules can't seem to communicate by themselves, so they would be reliant on the information flow through the neurons and limited to that same rate. I also didn't like the frequent mentions of "space-time" in the explanations; it sets off my "pseudoscience" warning. It could be legit but I'm not convinced. People smarter than me need to look into it.
The anesthetics example brings up uncomfortable questions about continuity of consciousness and whether it's really the same you that goes in and comes out. I do think it supports my point of a mind emerging from non-conscious elements.
By the way, modern computers do have to account for quantum effects, especially for dense SSDs to avoid the data quantum tunneling its way somewhere that it shouldn't be. All the engineering is to avoid quantum effects rather than actively using them though.
Don't dismiss brute-force boundary pushing outright; that's how minds evolved the first time. It did take a few billion years, though, so there's hopefully a faster way.
Binary computing being refined with "brute force" toward a human brain is like a blacksmith "brute forcing" a blade so sharp that it can cut the sky open.
I'm not saying true intelligence by artificial means is impossible, but the current method isn't it, even if it's an impressive Mechanical Turk (and uses human assistance in much of its vaunted output, much like the Mechanical Turk in history did).
Denigrating human intelligence just to elevate what marketing calls "AI" only helps :porky-happy: .
Didn't mean to imply that unpredictability is necessary for a mind, just that minds seem to have different/more physical components than computation.
I suspect that consciousness is a combination of structures within nerve cells and the electrical/chemical signaling between them. One of the consequences of the anesthetics research has been using ultrasound devices to induce more microtubule formation within cells, basically just to see what happens. One guy wound up laughing uncontrollably for a few minutes.
A lot of techbros don't seem aware of how much of what constitutes the human nervous system is not in the brain, either. The stomach alone has a lot of such material.
Computer touchers stop assuming a human brain is a binary computer except squishier challenge.
NNs aren't inherently binary, we just use binary to represent the values for engineering reasons. You could make an analogue one if you wanted.
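For what it's worth, here's a tiny sketch of that point (toy weights I made up, not from any real model): the arithmetic inside a single artificial neuron is ordinary continuous-valued math; binary is just how the hardware happens to store the numbers.

```python
import math

# Toy illustration (made-up weights): a single artificial "neuron"
# is a weighted sum plus a smooth activation. Nothing in the math
# itself is binary or on/off.
def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid: smooth, continuous

out = neuron([0.5, -1.2], [0.8, 0.3], 0.1)
print(out)  # a real number strictly between 0 and 1
```

You could compute the same weighted sum with op-amps and voltages instead of bits, which is the sense in which an analogue NN is possible.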
I was using a simplified summary.
Again, stop denigrating human brains (or non-human animal brains for that matter) in an attempt to elevate chatbots. It only serves :porky-happy: and their marketing hype while further dehumanizing the rest of us.
The issue I have is that if we're able to solve the inputs and outputs of a single cell or group of cells, and scale it up to mimic a human brain, whatever medium you're solving those problems on would contain a sentient being.
Brains and their properties are so complex we won't be able to simulate them for decades at least, but not supernatural.
but not supernatural.
If you're going there, you're not arguing in good faith with me in the first place.
I could just as quickly tell you that reductionistic statements about human neurology (something that is still not fully understood and is still being explored and experimented with to this day) that assume brain = computer are arrogant and sound a lot more like "someone with a hammer sees everything else as a nail" than an actually comprehensive understanding of the brain. You're implying that neuroscience is obsolete or redundant to a computer engineer, which is a tall claim.
EDIT: I removed an incendiary thing I originally put in in response to the "supernatural" snark.
I wasn't saying supernatural to imply you were superstitious about it, I just meant as opposed to natural phenomena we can model and predict.
You’re implying that neuroscience is obsolete or redundant to a computer engineer, which is a tall claim.
We're still learning new things about how individual neurons function and there are huge gaps in how they work collectively.
There's been some interesting experiments where neurons grown in a petri dish are used to generate physical NNs. These problems are being approached from both directions, but we're still a long, long way off.
What part of "brains and their properties are so complex we won't be able to simulate them for decades at least" did you take as implying that neuroscience is redundant to a computer engineer?
The assumption, then, is that it will be redundant in a few decades at least.
I mean a computer can simulate the solar system, that doesn't mean astronomers are redundant.
A computer doesn't (and likely won't, in the foreseeable future) account for every atom and subatomic particle and their interactions with each other when simulating that solar system, and for that matter likely doesn't simulate anyone's typical workday while commuting on the third planet from the sun.
I think you missed my point.
Simulations of things aren't necessarily the full sum of the thing being simulated. There are limitations, and one of them is that computers themselves exist in the same physical material universe as the things they are simulating. The sheer amount of computational power needed to account for everything is, for the foreseeable future, insurmountable, but many things short of that get a pass.
I have no idea how any of that is relevant to my original comment, which was just about how being able to simulate a brain does not make neuroscience redundant.
I was going back to the post before that, before you came in.
Brains and their properties are so complex we won’t be able to simulate them for decades at least, but not supernatural.
I disagree with the universal utility of computer simulations, and with the implication of "mimicking a human brain" as if to say it will solve all the unanswered questions of neuroscience simply by turning the simulation on.
Where did they say anything about simulating a brain answering our questions about neuroscience? If anything it's the other way around. We would need to solve all the unanswered questions of neuroscience in order to simulate a brain.
I may have read the implications differently than you did. In fact, I still do.
We would need to solve all the unanswered questions of neuroscience in order to simulate a brain.
That's actually what I was going for. That poster talked about it being only a matter of time for simulations to be complex enough to make virtual human brains, which sounded presumptuous to me because it seemed to imply that neuroscience was just waiting for computer engineering to take over and do its job.
Tying that back to astronomy, same deal. It seemed to assume that a sufficiently advanced simulation could discover everything about distant solar systems without actually using probes or telescopes or gathering further data on the actual thing.
That poster talked about it being only a matter of time for simulations to be complex enough to make virtual human brains, which sounded presumptuous to me because it seemed to imply that neuroscience was just waiting for computer engineering to take over and do its job.
Except they never said it was only a matter of time and they never said the limiting factor is the complexity of our simulations. In fact, they’ve clarified below that they’re aware of huge knowledge gaps about how neurons work.
I originally replied because it bothered me that you accused alcoholicorn of arguing in bad faith, but then read a bunch of implications into their comment that they never actually said. Not to mention your first comment in this thread being "computer touchers stop assuming the brain is a binary computer but squishier challenge."
I actually agree with you about a lot of AI stuff, but it feels like your comments about it are always so hostile that they make a real discussion about it very difficult.
but it feels like your comments about it are always so hostile
That's a subjective thing and from my side I predictably disagree with it as an "always" thing. Still, when you came in, since you started it this way:
What part of “brains and their properties are so complex we won’t be able to simulate them for decades at least” did you take as implying that neuroscience is redundant to a computer engineer?
I actually :took-restraint: here because the way that question was framed sounded in bad faith from the start to me.
EDIT: Added more below.
Not to mention your first comment in this thread being “computer touchers stop assuming the brain is a binary computer but squishier challenge.”
Yeah, I will own up to coming in with a chip on my shoulder because I really didn't like this response before I commented.
Do you think a collection of organic chemicals can have a mind? All chemicals can do is what physics determine that they must do.
Even ignoring the hard-determinism struggle-session potential of that, it was what I was primarily summarizing, and it was said in reaction to an argument about chatbot intelligence or lack thereof.
I am once again asking you to stop dehumanizing people in an attempt to elevate chatbots. :bernie-pout:
if only literally anyone was doing that!!
Do you think a collection of organic chemicals can have a mind? All chemicals can do is what physics determine that they must do.
Your ongoing grudge is noted. Tiresome, but noted.
Just post your usual :cringe: face at my posts and move on.
Everything I post is apparently no good to you. Nothing new there. :wall-talk:
Mechanically following a predetermined set of instructions
Except the whole point of machine learning is that it's not predetermined. Yeah, the actual math or whatever is maybe predetermined, but the parameters and inputs and outputs aren't. A desktop program is not similar at all to a machine learning algorithm.
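A minimal sketch of that point (invented numbers, obviously nothing like a real model): the code below is completely predetermined, but the parameters w and b end up wherever the data pushes them; no programmer wrote them in.

```python
# Minimal sketch (invented data): the loop itself is predetermined,
# but the parameters w and b are learned from the data rather than
# written in advance by a programmer.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # secretly y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05  # learning rate
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x  # nudge each parameter toward the data
        b -= lr * err

print(w, b)  # converges close to 2.0 and 1.0
```

Change the data and you get a different w and b from the same "predetermined" instructions, which is the sense in which the behavior isn't hand-written.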
Some doofus customer broke his shit because instead of asking us how to do something he asked chatgpt and it told him step by step how to break his shit. And he did it. Fixing stupid shit people did because a chatbot told them to is going to become a massive time-suck across all industries.
There's already someone in this thread begging to mess something up by asking chatgpt.
ChatGPT has made me a ton more productive. I use it to bounce ideas off of, it helps me decompose hard problems, it gives me good advice and can show me step-by-step solutions, and it directly helps me to do my job. It also makes a half decent therapist.
However, I'm probably just an early adopter for using it as a tool to increase my productivity. Soon, this level of productivity will be expected from everyone and I'm not looking forward to that time. Expectations will increase and wages will decrease.
However, I’m probably just an early adopter for using it as a tool to increase my productivity. Soon, this level of productivity will be expected from everyone and I’m not looking forward to that time. Expectations will increase and wages will decrease.
Thank you. We're in the early Google stages, where the product is nice enough to use that it doesn't seem like a problem yet.
Yeah, I'm having the same experience and coming to the same conclusion with it. It's saving us labor now, but once everyone's aware of it it'll just end up giving us a bigger workload.
I’d put the odds of a nuclear war around 1% a year. I’d say there’s a 1% chance AI can be as world-ending as that. That’s such a low number that it doesn’t feel like “AI doomerism”. But 1% multiplied by however much we value civilization is still a godalmighty risk.
Honest question: why do you think this? What's the line of escalation that turns ChatGPT into armageddon?
Something that's very, very important to understand about AI is that what pop culture presents as AI is pure fantasy. All AI is in a modern sense is a computer doing brute-force linear statistics to figure out the ideal solution to a data set. A modern AI "learns" in the same sense that rain drops hitting my roof "learn" a way to the ground, it's not a sentient being.
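To make the "linear statistics" claim concrete, here's a toy sketch (made-up numbers): fitting a trend line to a data set with a closed-form least-squares formula. This is nowhere near what large models actually do, but it illustrates the family of technique being described: pure curve fitting, no understanding anywhere in the pipeline.

```python
# Toy sketch (made-up numbers): "learning" a trend from a data set
# with a closed-form least-squares fit.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

n = len(xs)
mx = sum(xs) / n  # mean of x
my = sum(ys) / n  # mean of y
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
print(slope, intercept)  # roughly 1.94 and 0.15
```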
The potential societal impacts of AI that have any real validity at the moment are:
- Journalists/bloggers/writers getting replaced by a machine (which is already broadly the case anyway; SEO blogs aren't being filled by real people's efforts), or
- Codemonkeys potentially being replaced by a ChatGPT (and then rehired to review the AI's work, because there's zero guarantees that what it spits out isn't total bullshit). This one is also extremely dependent on someone coming up with a service that capital feels comfortable feeding its intellectual property into, which as someone who's been in meetings making that sort of decision before: Good fucking luck with that.
Beyond that? I think it's just another tool to solve some niche problems.
I recently received an email from a higher up in the IT department of the huge multinational corporation I work for, prohibiting employees from using ChatGPT for work related purposes, citing copyright, security, fraud and abuse concerns. Granted, said company is more in the "multinational industrial capital" realm than the "multinational finance capital" realm, and based in the "international community," not in the US.
Based on this, I'm positing that the bourgeoisie is not wholly on board with the AI trend, just US techbro and finance capital.
Only the highest echelons of finance capital gain long-term from it. For every other part of capital it has potential for short-term gains, but long-term it makes every firm fungible and dependent on providers like OpenAI. This is terrifying to any capitalist who understands it.
And in forbidding it, they tacitly acknowledged that their employees would have found it useful for their work.
My dad is independently trying to figure out how he can use chatgpt to work for him. He's like "haha I'm going to stick it to the man". Like dude, if you saw it they saw it too.
I can understand not using ChatGPT for various reasons, but even taking nanoGPT, throwing a web interface on top, and letting internal users ask stuff like "how likely is ____?" while it writes and runs a query for them, instead of them figuring out some query language, has been time-saving for me.
I'm not going to be dismissive, but having friends in the industry, this round of AI development is mostly going to be scary to code-monkey programmers, but, if it takes off, will have larger implications later on.
What programs like ChatGPT are actually good at is pulling and rapidly constructing middle-level code with far fewer bugs than in other assemblage procedures. That being said, there is still a LOT of jank to work through, but if the tech guys can get this working they can get rid of most of their labor costs (which is one of the things they are most concerned with), basically outsourcing all of their middle-level coding to AI (low-level coding is mostly at the point of copy-pasting modules and getting them to agree as it is). The issue is that the program still can't tell good code from shit code, much like it can't tell good facts from bad facts. But it's great at scraping and rapidly pulling procedures from the internet.
The long-term implication of this is that the black box around code gets even larger, even to the computer scientists working with these programs.
So what we're going to end up getting is a bunch of very powerful math solvers with none of the intelligence embedded in them. They cannot reason. They cannot solve problems with creativity and innovation. They cannot think outside the box. They don't understand causality. They cannot replace real human professionals in making critical decisions, for example, performing medical diagnoses. And above all, they are prone to error and cannot use reasoning to self-correct.
Maybe that's for the best with :porky-happy: in command of technology.
Still, it's irritating how the "AI" marketing label has stuck so successfully. It makes a lot of people really, really "IT'S HAPPENING" about the term.
Machine learning is clearly an achievable thing. Making some kind of "human intelligence" software is still an unknown thing that a supercomputer couldn't support (think how much power and time was used just for the big neural networks we hear about right now). Organizations focus on things that are actually achievable and realistic.
Plus actual "intelligence" would just be used for slavery or war or some other awful thing anyways. What is even the point of making an artificial human brain?
What is even the point of making an artificial human brain?
:porky-happy: wants exceptionally efficient and powerful slaves that are "friendly" and by that they mean unable to fight back.
what will happen is that people will start buying into the AI hype after only engaging it at a very superficial level and then you'll have entire disciplines getting capital-R Rationalized (like cogsci, 'bayesian brains' jfc) and it'll take fucking decades to dig ourselves out of that hole once we hit rock bottom and figure out that brute forcing square peg through round hole doesn't actually qualitatively solve anything. but in the meantime it'll do a phenomenal job of taking up all the money and oxygen in the room.
current incarnation of AI is to computer science what neoclassical is to economics
My thing is knowing a little bit about literally every topic but now that talent is useless because AI does that :wojak-nooo:
isn’t it fucked that there’s a cabal of the biggest freaks money has ever produced debating the future of humanity with zero input from normal society?
Yes, and it's fucked how many people claim it will be liberating while ignoring or dismissing who owns this shit and who commands its use ( :porky-happy: ), usually under "fuck you, got mine" statements, or even "the people driven out of work aren't real workers because I don't see a hard hat" nazbol talking points with a thin veneer of Materialism(tm) so it can still be called a leftist take.
It’s a powerful tool for quickly roughing out the work that used to take “knowledge workers” or “creatives” days.
Imagine the huge change that having a team of students (or slaves) rough out sculptures brought on. That’s the change we’re seeing now.
It’s not gonna lead to nuclear war or sentience. We’re gonna see some funny idoru situations but no hal 9000.
The smart money is in using it to catfish dudes for stuff and cash. Imagine the number and quality of marks you can keep on the line with replika on your side.
Large Language Model, but the only language it knows is :bottom-speak:
The models I've played with have been surprisingly good at picking up on subtext and responding appropriately, though they tend to go a little far and need to be reined in.
I'm still in the camp that it's mostly going to end up being a tool that workers in creative and tech fields use, one that'll effectively just increase the amount of work they have to get out while easing the challenge of it at the same time.
So mostly a neutral effect on society, since easing the burden of workers will be counteracted by the greed of the capitalist.
On the fake news front and conspiracy, I waffle back and forth between this might be world ending if advanced enough and the "well they bluntly lie about things without proof as it is, why would this matter that much?"
I'm not dismissive though, I fully think it will fundamentally change the way a lot of jobs operate going into the future.
since easing the burden of workers will be counteracted by the greed of the capitalist.
:astronaut-2: :astronaut-1:
In the 1950s, new technology was presented and marketed as being such labor savers that everyone would have a 10 hour workweek.
It's gonna make the slop coming out of the entertainment industry even worse for sure lol
I seriously doubt it gets used for production, but yeah it'll definitely have a place in post.
I'm not in the industry or anything, but I feel like it's going to be in a lot of concept art (because that's essentially what a lot of the image networks create) and "design" stuff if it isn't already. Not in CGI, at least not in a negative lazy way.
I think people will use it for things that feel somewhat repetitive, and then not realize that it's making them produce more repetitive stuff. Imagine using it to assist in drawing something, and it produces the same color palette over and over again without you realizing. Suddenly, you have a bunch of art which is subtly similar. People will start noticing subtle hallmarks of machine learning assisted art in stuff in advertisements or other things like that. But there will be more content for people to consume or at least it will be cheaper for the companies paying to produce it.
Art directors are supposed to stop a lot of this from happening already though
No you're good. I think there'll still be plenty of slop all over the place, just I'd expect something more akin to really low quality YouTube chud propaganda than it meddling too hard with good studios.
The studios already producing shit work, like DC, overwork people and get crap as a result; idk, they might try to turn harder toward AI to lower costs. I doubt they'll get a lot of audience from it but it's possible that the quality difference between a machine and burnt out underpaid artists might not be that noticeable.
Basically the people already phoning it in will continue to.
Your 1% number is way too low, especially if you consider "the capitalists win, permanently" to be as bad or worse than Armageddon. It should be clear by this point that we're on the brink of something that is indisputably AGI, whether that means 3 years or 10, or 30, it's still not enough time. Yeah but stochastic parrot blah blah we're running out of data, fuzzy jpeg, poo poo pee pee. Literally shut the fuck up, you have no imagination, look at the pace of capabilities if you need to be all empirical about it, skeptics have been saying the same shit all decade and each year they've been proven wrong in ever more dramatic ways. So we're going to be sitting on an automated economy; whether we come out alive or not, workers will be obsolete. The only response to this is for the means of production to be in the hands of the people, worldwide, and like yesterday. Anything short of that, it's death for all, or permanent dystopia unless our human overlords happen to be much more benevolent than we've ever given them credit for, and smarter, and their children and grandchildren as well. Heads out of asses now!