You are allowed to comment if you absolutely hate AI, or love it. If you think it is overrated or underrated, ok (although I think it's too early to say what the consensus even is to know whether it is overrated/underrated). But if you think it is just a scam, gimmick, grift, etc I don't need to hear from you right now :soviet-heart:
Let the discussion begin:
So it's clear there's this big alignment debate going on rn. Regardless of where you stand, isn't it fucked that there's a cabal of the biggest freaks money has ever produced debating the future of humanity with zero input from normal society?
Even if it isn't humanity's future they think it is. There's probably like 100 people in the world paid to work on alignment. How can you not develop a megalomania complex?
What kind of chatter are you hearing about AI?
I very occasionally hear people irl obliquely mention AI. A cashier said something like 'oh that AI stuff, that's pretty scary'. That's about it.
Now the blogs I follow have been basically colonized by AI news. These aren't even strictly technology blogs. I started following David Brin for UFO takes, I started following Erik Hoel for neuroscience takes. Literally everyone I follow has published takes on AI and zero of them dismiss it out of hand.
Sorry this will get long.
I basically feel like we are in another version of the post nuclear age except only insiders know it. After the first A-bomb, everyone knew the world was different starting the very next day. Now only designated take-havers are aware of this new reality.
Or regular folks are aware of it but they're so disempowered from having a say that they only engage with the realization online like I'm doing now. Medicare for all is Bernie's thing. The border is Trump's. Even if nothing will ever be done about healthcare, the fact that Bernie talks about it justifies you thinking about it. AI isn't any politician's thing.
I'd put the odds of a nuclear war around 1% a year. I'd say there's a 1% chance AI can be as world-ending as that. That's such a low number that it doesn't feel like "AI doomerism". But 1% multiplied by however much we value civilization is still a godalmighty risk.
When I've heard this site talk about it, it's usually in the context of "holy shit this AI art is garbage compared to real art? Where's the love, where's the soul?" If it was 1945 and we nuked a city, would you be concerned with trying to figure out what postmodernism would look like?
Usually when I've gotten to the end of my post I delete it.
AI DOES NOT EXIST. "AI" IS USED AS A MARKETING TERM. THIS IS WHY I CANNOT CONTAIN MY ABSOLUTE HATRED OF ALL COMPUTER ALGORITHMS MARKETED AS "AI". THE FACT OF THE MATTER IS THAT CHAT GPT AND ALGORITHMS LIKE IT ARE AS FAR FROM BEING "INTELLIGENCE" AS A RANDOM SAND DUNE IS FROM BEING A SUPERCOMPUTER.
Anyway these algorithms are just a way to reduce labor, like any other capitalist invention. There's nothing particularly novel or unique about them as they currently exist, except that they can do something that used to have to be done "manually" much faster.
Take image generation for example. Computer image generation is not "AI art", and it's not "art". But from the point of view of the capitalist, that's completely acceptable. Marketing imagery, for example, doesn't need to be "art" in order to shove an image into the public consciousness and increase demand for a product. In fact, I would say that the fact that marketing images until now have also been a form of art is, from the capitalist's perspective, a liability that they will be happy to part with. No more art that might accidentally make a statement that the corporation doesn't want to make, no more artists who can go off and harm the brand indirectly - just pure marketing straight from the computer, in infinite amounts.
I just want to emphasize that I don't mean to come off as dismissive here. Obviously this is going to be incredibly disruptive, and we're only beginning to see the outline of all of the people whose jobs are about to become proletarianized. Marketers, journalists, some programmers - but what about doctors? A machine that can diagnose patients with 99% accuracy is not far off. Architects and engineers? Unless they're working at the cutting edge of science, an algorithm that does what they do faster is easy to imagine. It will be pretty funny seeing the first "virtual CEO" outperform all of its peers, but this will not meaningfully change the trajectory of our economic system.
But if you think it is just a scam, gimmick, grift, etc I don’t need to hear from you right now
Too bad! :garf-troll:
I don't think computers can have a mind. It's a maths machine. Mechanically following a predetermined set of instructions, toggling switches as its rule-set requires. Everything else is abstraction. Desktops and programmes and widgets and websites are just patterns in the switches that people have agreed to peg meaning onto. Circuits and switches can no more hold a mind than the pulp and ink of a book.
The existential threat of "AI" is our leaders, both public and private, turning over decision making to what is essentially a complicated, yet brainless, abacus.
I don’t think computers can have a mind. It’s a maths machine.
Do you think a collection of organic chemicals can have a mind? All chemicals can do is what physics determines that they must do.
Now there is the interesting question!
Yes, organic chemicals can produce a mind. Yes, they are determined by physical properties. What sets you and me and the dog apart from computers is which physical properties are in play.
Computer engineers use reliable physical properties to make predictable, deterministic logic gates. Doesn't matter what programme you run (or, conversely, which computer you run the programme on), the gates always behave predictably. Make them too small, though, and quantum effects overtake the predictable properties. The machine stops being predictably deterministic and cannot function as a computer.
We don't know how minds come about. Programmer types like to say it's the interaction between neurons – that each cell behaves like a logic gate in a computer. That is pure conjecture. They want that to be the case.† And… reality doesn't quite line up with that story. Anesthetics point to a deeper level of physical phenomena.
When a patient goes into surgery, it's not ideal for them to be conscious during it. So we switch that off, with some good ol' anesthetics! And I do mean "switched off" – anesthetized patients don't even dream. How does it happen? For the longest time, nobody was sure. An anesthesiologist and some researchers decided to look into it. What they found is that anesthetics block the formation of these little structures inside cells, called microtubules.
From what I (mis)understand, quantum physicists find microtubules really interesting. Something to do with radial symmetry and interactions between the molecules that make up the tube? I don't understand quantum. The point is, whatever explanation for consciousness we find, it looks like it's gonna include some quantum-chemical properties that don't gel well with computable mathematics. Which shouldn't be all too surprising. Even photosynthesis depends on quantum phenomena to get the electromagnetic radiation into the cell.
__
† It makes their tables of variables strung up to other tables of variables seem like boundary-pushing research into the depths of consciousness itself – as opposed to just a calculation-heavy, brute-force approach to problem solving.
I'm not sure that unpredictability is absolutely necessary for a mind. I don't see why a deterministic entity couldn't have a subjective experience of consciousness. How predictable does a person have to be before they're no longer conscious? Is it falling for "down low, too slow" ten times? I hope not, I know some young kids that I've personally done that to ten or more times.
The quantum thing feels like they're just pushing off consciousness to the next level of physics that we don't understand yet. I couldn't find any explanation of how the quantum effects actually contribute to either consciousness or cognition. I suspect that consciousness emerges from the network of neurons; it's the flow of chemicals and electric potentials through the brain rather than the structure of the neurons themselves. These microtubules can't seem to communicate by themselves, so they would be reliant on the information flow through the neurons and limited to that same rate. I also didn't like the frequent mentions of "space-time" in the explanations; it sets off my "pseudoscience" warning. It could be legit but I'm not convinced. People smarter than me need to look into it.
The anesthetics example brings up uncomfortable questions about continuity of consciousness and whether it's really the same you that goes in and comes out. I do think it supports my point of a mind emerging from non-conscious elements.
By the way, modern computers do have to account for quantum effects, especially for dense SSDs to avoid the data quantum tunneling its way somewhere that it shouldn't be. All the engineering is to avoid quantum effects rather than actively using them though.
Don't dismiss brute-force boundary pushing outright; that's how minds evolved the first time. It did take a few billion years, though, so there's hopefully a faster way.
Didn't mean to imply that unpredictability is necessary for a mind, just that minds seem to have different/more physical components than computation.
I suspect that consciousness is a combination of structures within nerve cells and the electrical/chemical signaling between them. One of the consequences of the anesthetics research has been using ultrasound devices to induce more microtubule formation within cells, basically just to see what happens. One guy wound up laughing uncontrollably for a few minutes.
NNs aren't inherently binary, we just use binary to represent the values for engineering reasons. You could make an analogue one if you wanted.
The issue I have is that if we're able to solve the inputs and outputs of a single cell or group of cells, and scale it up to mimic a human brain, whatever medium you're solving those problems on would contain a sentient being.
Brains and their properties are so complex that we won't be able to simulate them for decades at least, but they're not supernatural.
I wasn't saying supernatural to imply you were superstitious about it, I just meant as opposed to natural phenomena we can model and predict.
You’re implying that neuroscience is obsolete or redundant to a computer engineer, which is a tall claim.
We're still learning new things about how individual neurons function, and there are huge gaps in our understanding of how they work collectively.
There's been some interesting experiments where neurons grown in a petri dish are used to build physical NNs. These problems are being approached from both directions, but we're still a long, long way off.
What part of "brains and their properties are so complex we won't be able to simulate them for decades at least" did you take as implying that neuroscience is redundant to a computer engineer?
I mean a computer can simulate the solar system, that doesn't mean astronomers are redundant.
I have no idea how any of that is relevant to my original comment, which was just about how being able to simulate a brain does not make neuroscience redundant.
Where did they say anything about simulating a brain answering our questions about neuroscience? If anything it's the other way around. We would need to solve all the unanswered questions of neuroscience in order to simulate a brain.
That poster talked about it being only a matter of time for simulations to be complex enough to make virtual human brains, which sounded presumptuous to me because it implied that neuroscience was just waiting for computer engineering to take over and do its job.
Except they never said it was only a matter of time and they never said the limiting factor is the complexity of our simulations. In fact, they’ve clarified below that they’re aware of huge knowledge gaps about how neurons work.
I originally replied because it bothered me that you accused alcoholicorn of arguing in bad faith, but then read a bunch of implications into their comment that they never actually said. Not to mention your first comment in this thread being "computer touchers stop assuming the brain is a binary computer but squishier challenge."
I actually agree with you about a lot of AI stuff, but your comments about it are always so hostile that they make a real discussion very difficult.
Mechanically following a predetermined set of instructions
Except the whole point of machine learning is that it's not predetermined. Yeah, the actual math or whatever is maybe predetermined, but the parameters and inputs and outputs aren't. A desktop program is not similar at all to a machine learning algorithm.
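To make that concrete, here's a toy sketch (my own made-up numbers, not any real model): the update rule below is completely fixed, but the learned parameter is dictated entirely by the data, so nobody writes it in advance.

```python
# Toy "learning": the procedure is predetermined,
# but the resulting weight is determined by the data.
# Fit y = w * x by gradient descent on made-up points (roughly y = 2x).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0    # the parameter starts out knowing nothing
lr = 0.01  # learning rate
for _ in range(1000):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # lands near 2.04, set by the data, not by a programmer
```

Swap in different data and the same fixed code produces a different w, which is the sense in which the behavior isn't predetermined the way a desktop program's is.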
Some doofus customer broke his shit because instead of asking us how to do something he asked chatgpt and it told him step by step how to break his shit. And he did it. Fixing stupid shit people did because a chatbot told them to is going to become a massive time-suck across all industries.
There's already someone in this thread begging to mess something up by asking chatgpt.
ChatGPT has made me a ton more productive. I use it to bounce ideas off of, it helps me decompose hard problems, it gives me good advice and can show me step-by-step solutions, and it directly helps me to do my job. It also makes a half decent therapist.
However, I'm probably just an early adopter for using it as a tool to increase my productivity. Soon, this level of productivity will be expected from everyone and I'm not looking forward to that time. Expectations will increase and wages will decrease.
Yeah having the same experience and am coming to the same conclusion with it. It's saving us labor now, but once everyone's aware of it it'll just end up giving us a bigger workload
I’d put the odds of a nuclear war around 1% a year. I’d say there’s a 1% chance AI can be as world-ending as that. That’s such a low number that it doesn’t feel like “AI doomerism”. But 1% multiplied by however much we value civilization is still a godalmighty risk.
Honest question: why do you think this? What's the line of escalation that turns ChatGPT into armageddon?
Something that's very, very important to understand about AI is that what pop culture presents as AI is pure fantasy. All AI is in a modern sense is a computer doing brute-force linear statistics to figure out the ideal solution to a data set. A modern AI "learns" in the same sense that rain drops hitting my roof "learn" a way to the ground, it's not a sentient being.
The potential societal impacts of AI that have any real validity at the moment are:
- Journalists/bloggers/writers getting replaced by a machine (which is already broadly the case anyway; SEO blogs aren't being filled by real people's efforts), or
- Codemonkeys potentially being replaced by ChatGPT (and then rehired to review the AI's work, because there are zero guarantees that what it spits out isn't total bullshit). This one is also extremely dependent on someone coming up with a service that capital feels comfortable feeding its intellectual property into, which, as someone who's been in meetings making that sort of decision before: Good fucking luck with that.
Beyond that? I think it's just another tool to solve some niche problems.
I recently received an email from a higher-up in the IT department of the huge multinational corporation I work for, prohibiting employees from using ChatGPT for work-related purposes, citing copyright, security, fraud and abuse concerns. Granted, said company is more in the "multinational industrial capital" realm than the "multinational finance capital" realm, and based in the "international community," not in the US.
Based on this, I'm positing that the entirety of the bourgeoisie is not on board with the AI trend, just US techbro and finance capital.
Only the highest echelons of finance capital gain long-term from it. For every other part of capital it has potential for short-term gains, but long-term it makes every firm fungible and dependent on providers like OpenAI. This is terrifying to any capitalist who understands it.
And in forbidding it, they tacitly acknowledged that their employees would have found it useful for their work.
My dad is independently trying to figure out how he can use chatgpt to work for him. He's like "haha I'm going to stick it to the man". Like dude, if you saw it they saw it too.
I can understand not using ChatGPT for various reasons, but even taking nanoGPT and throwing a web interface on top and letting internal users ask stuff like "how likely is ____?" and letting it write and run a query instead of them figuring out some query language has been time-saving for me.
I'm not going to be dismissive, but having friends in the industry, this round of AI development is mostly going to be scary to code-monkey programmers, but, if it takes off, will have larger implications later on.
What programs like ChatGPT are actually good at is pulling and rapidly constructing middle-level code with far fewer bugs than other assemblage procedures. That being said, there is still a LOT of jank to work through, but if the tech guys can get this working they can get rid of most of their labor costs (which is one of the things they are most concerned with), basically outsourcing all of their middle-level coding to AI (low-level coding is mostly at the point of copy-pasting modules and getting them to agree as it is). The issue is that the program still can't tell good code from shit code, much like it can't tell good facts from bad facts. But it's great at scraping and rapidly pulling procedures from the internet.
The long-term implication of this is that the black box around code gets even larger, even to the computer scientists working with these programs.
My thing is knowing a little bit about literally every topic but now that talent is useless because AI does that :wojak-nooo:
Machine learning is clearly an achievable thing. Making some kind of "human intelligence" software is still an unknown thing that a supercomputer couldn't support (think how much power and time was used just for the big neural networks we hear about right now). Organizations focus on things that are actually achievable and realistic.
Plus actual "intelligence" would just be used for slavery or war or some other awful thing anyways. What is even the point of making an artificial human brain?
what will happen is that people will start buying into the AI hype after only engaging it at a very superficial level and then you'll have entire disciplines getting capital-R Rationalized (like cogsci, 'bayesian brains' jfc) and it'll take fucking decades to dig ourselves out of that hole once we hit rock bottom and figure out that brute forcing square peg through round hole doesn't actually qualitatively solve anything. but in the meantime it'll do a phenomenal job of taking up all the money and oxygen in the room.
current incarnation of AI is to computer science what neoclassical is to economics
It’s a powerful tool for quickly roughing out the work that used to take “knowledge workers” or “creatives” days.
Imagine the huge change that having a team of students (or slaves) rough out sculptures brought on. That’s the change we’re seeing now.
It’s not gonna lead to nuclear war or sentience. We’re gonna see some funny idoru situations but no hal 9000.
The smart money is in using it to catfish dudes for stuff and cash. Imagine the number and quality of marks you can keep on the line with replika on your side.
Large Language Model, but the only language it knows is :bottom-speak:
The models I've played with have been surprisingly good at picking up on subtext and responding appropriately, though they tend to go a little far and need to be reined in.
I’d put the odds of a nuclear war around 1% a year.
That means there was a 50% chance of a nuclear war over the past 70 years. And a 50% chance over the next 70 as well (if the odds stay at 1% per year).
Not to be all pedantic (or minimize the horror of nuclear war), but this isn't actually how stats work, although I do think the ballpark of a 50% chance of nuclear holocaust over 70 years sounds about right, as terrifying as that is. But yeah, just think about it -- if a 1% yearly chance simply added up, then in 100 years it would be 100% guaranteed to happen, which can't be right. In reality the 1% chance remains 1% every year, no matter how many years aggregate.
I'm not a stats expert though so if I'm wrong someone please correct me
I did check my math before posting and now I'll inflict it on you. To figure out the chance of a nuclear war happening at any time over 70 years (with a 1% chance each year) what you actually need to do is figure out the odds of a nuclear war not happening at any point over the 70 years.
We'll start with just 2 years. There's a 99% chance of no nukes in the first year and a 99% chance of no nukes in the second year. So you can multiply the 99% (from the first year) by 99% (from the second year) to get 98.01% (.99 * .99). You can do the same math for the 3 years (.99 * .99 * .99) and the 4 years (.99 * .99 * .99 * .99).
You can (and probably should) also express this as 99% to the power of however many years you're dealing with. So 70 years would be 99% to the power of 70 (.99^70) which is the same as multiplying .99 by itself 70 times. This works out to .495 or so, or a 49.5% chance of not having a nuclear war over 70 years. You flip that over to get a 50.5% chance of having a nuclear war and then you round it to a nice round 50%.
Interestingly enough, this math only gives you the odds of 1 or more nuclear wars over 70 years. The math for exactly one nuclear war is more complicated and I can't do it off-hand.
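For what it's worth, both numbers fall out of the binomial distribution. A quick sketch (Python, my addition, assuming the same independent 1%-per-year figure) covering the "at least one" case worked out above plus the "exactly one" case:

```python
import math

p = 0.01    # assumed chance of nuclear war in any given year
years = 70

# At least one war: complement of "no war in any of the 70 years"
p_at_least_one = 1 - (1 - p) ** years

# Exactly one war: pick which single year it happens (binomial, k = 1)
p_exactly_one = math.comb(years, 1) * p * (1 - p) ** (years - 1)

print(f"at least one: {p_at_least_one:.1%}")  # 50.5%
print(f"exactly one:  {p_exactly_one:.1%}")   # 35.0%
```

So of that ~50% chance of any war, roughly 35 points come from exactly one war and the rest from two or more.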
In reality the 1% chance remains 1% every year, no matter how many years aggregate.
Absolutely correct. But we do need to aggregate these odds over a number of years somehow. The thing to keep in mind is that this is starting at the beginning of the 70 years. Once a year is behind you then you can ignore it for statistical purposes. It's like if you already got 5 heads in a row, the chance of the next flip being a heads is still 50%. However the odds of getting 6 heads in a row starting from nothing is pretty low.
The ~50% is just 1 - (1 - 0.01)^70, about 50.5% (since 0.99^70 is about 49.5%). This is assuming the 1% for each year is statistically independent (ie not getting nuked in 2020 doesn't affect the odds of not getting nuked in 2022).
50% chance of a nuclear war over the past 70 years
I'd say it was higher than that on 27 October 1962 alone (edit: or 26 September 1983).
Questions about past probabilities get thorny when you consider that, had events on that day not played out in humanity's favor, we wouldn't be here to discuss them, so our mere existence places a thumb on the scale.
Your 1% number is way too low, especially if you consider "the capitalists win, permanently" to be as bad or worse than Armageddon. It should be clear by this point that we're on the brink of something that is indisputably AGI; whether that means 3 years or 10, or 30, it's still not enough time. Yeah yeah, stochastic parrot blah blah, we're running out of data, fuzzy jpeg, poo poo pee pee. Literally shut the fuck up, you have no imagination; look at the pace of capabilities if you need to be all empirical about it. Skeptics have been saying the same shit all decade and each year they've been proven wrong in ever more dramatic ways. So we're going to be sitting on an automated economy; whether we come out alive or not, workers will be obsolete. The only response to this is for the means of production to be in the hands of the people, worldwide, and like yesterday. Anything short of that, it's death for all, or permanent dystopia unless our human overlords happen to be much more benevolent than we've ever given them credit for, and smarter, and their children and grandchildren as well. Heads out of asses now!