"The world has changed forever... is the name of another Medium article I'm writing" :tito-laugh:
"Everything I normally outsource to Fiverr, I now outsource to ChatGPT 4"
There are two people in my life who are completely pilled on this singularity shit and cannot understand why I don't care about it at all. One of them suggested I'm going to be out of a job in ten years lol (I won't be).
Am I missing anything important by ignoring all this stuff? To me it just seems like finally there's a technology that threatens the livelihood of the intellectual class instead of regular grunt workers, so now there needs to be a big hullabaloo.
It doesn't even really threaten intellectual workers, since it constantly makes mistakes and will generate solutions that look right but are actually wrong (which makes the errors much harder to find if you've fired all your intellectual workers).
What it can easily replace (and currently is replacing) is simple middle management work and simple clerical/recruitment work. Stuff like summarizing documents, writing copy, writing emails, responding to employment inquiries, etc.
The mass layoffs we saw earlier this year from the tech giants weren't programmers or engineers; they were largely middle managers and recruiters.
To me it just seems like finally there’s a technology that threatens the livelihood of the intellectual class instead of regular grunt workers so now there needs to be a big hullabaloo.
It bothers me so much that having work replaced by machines has become a bad thing in any context for any reason. That's supposed to be POGGERS not PogO. In my mind it just highlights a shitty part about capitalism. Oh, we don't need you to organize the spreadsheet? Neat, go home and enjoy your time :)
Better yet, “You no longer need to worry about meeting the deadline for this extremely specific deliverable, we have the infrastructure in place to automate this. Go home and enjoy your time :)”
Instead it’s, “you better meet the damn deadline for this deliverable regardless of what life circumstances might have caused a delay”
One of them suggested I’m going to be out of a job in ten years lol (I won’t be).
lmao ten years? Talk about a head start
To me it just seems like finally there’s a technology that threatens the livelihood of the intellectual class instead of regular grunt workers so now there needs to be a big hullabaloo.
honestly i think that's pretty significant for a few reasons. Not because those people are more important, but because they're more privileged. a few points swimming around in the old noggin right now:
- I remember learning in psychology class many years ago that people can usually mentally adjust to a slow decline in their quality of life, but they suffer a lot more trauma and react more extremely at a rapid decrease in their quality of life.
- working class people in the USA have been treated like shit for decades and mostly just put up with it because there is not enough revolutionary political organization.
- petit bourgeois, white-collar workers, highly paid professionals like lawyers and doctors, labor aristocrats, for lack of a better term, the "shrinking middle class" (as American media likes to call them) will react violently when their way of life is threatened.
- AI is threatening to turn these groups into "low paid" "low skill" "blue collar" regular proles. (i know these terms are problematic for various reasons, but there you have it)
- That would potentially result in a rapid decrease in their quality of life, especially if their employers lay them off or don't find them some adjacent work to pivot them onto
- it would swell the ranks of the proletariat with newly disenfranchised people who, owing to their privilege, are educated and resourceful, and, owing to their rapid decrease in quality of life, are angry and bitter
I think there's more to it than that (as always), but I lack the energy to put it into words right now
Nah, most singularity-type stuff is just some hand-waving about accelerating progress therefore some hand-waving about the future being this unknowably advanced techno future that confirms the writer's biases. But there's not some root underlying acceleration-ness to technological progress, just a whole lot of hard work and the occasional breakthrough that causes a burst of rapid results.
There's a very specific threshold of AI advancement that's singularity-like - if someone builds AI models that are capable of understanding how their AI works and coming up with novel improvements. That would be incredibly dangerous and probably will destroy the world within a couple years. That's also not happening any time soon - ChatGPT doesn't understand anything it says, only what words appear in what order. So you should insult anyone trying to make one of those.
if someone builds AI models that are capable of understanding how their AI works and coming up with novel improvements. That would be incredibly dangerous and probably will destroy the world within a couple years.
Yeah this is what one of them is afraid of, and they're very concerned about "AI security". OOC why do you say that's not happening soon, and in what way is such an AI an actual threat to humanity?
Not happening soon - Kind of hard to explain without really getting into how things like ChatGPT work. The real reason I'm confident about this is that I sat through learning how LLMs work (best explanation I've seen, if you're already technically inclined) and there's nothing inside it that can reason. But some easy arguments are that you can't get ChatGPT to output a novel idea that isn't just a combination of two existing ideas, that the "increased size = more performance" scaling regime has leveled out pretty hard, and that OpenAI has already given up on scaling that way.
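To make "only what words appear in what order" concrete, here's a toy bigram generator (my own illustration, not how a transformer works internally; a real LLM learns vastly richer statistics, but the job is still "predict a likely next token"):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which: the model's entire "knowledge"
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=10):
    word, out = start, [start]
    for _ in range(n):
        if not follows[word]:  # dead end: no observed continuation
            break
        word = random.choice(follows[word])  # pick a statistically likely next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat sat on"
```

There's no step anywhere in there where it reasons about cats or rugs. Scaling the statistics up doesn't add one.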
Genuine threat - This comes in two parts, capability and amorality.
Capability - We have no reason to believe that human-level intelligence is some sort of fundamental cap. If an AI is capable of performing novel AI research to a good enough level to build a better AI, that better AI will be able to improve on the original design more than the first. This lets someone build a feedback loop of better and better AIs running faster and faster. We don't have any idea what the limits of these things are, but because human intelligence is probably not some sort of cap, the ceiling is presumably a lot higher than us.
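Purely illustrative arithmetic for that feedback loop (the 10% figure is invented; this models nothing about real AI research), just to show why "the better AI does the improving" changes the shape of the curve:

```python
# Invented numbers, only the shape of the curves matters
fixed, self_improving = 1.0, 1.0
for generation in range(1, 11):
    fixed += 0.10                             # human researchers improve the AI at a flat rate
    self_improving += 0.10 * self_improving   # the better AI does the next round of improving
    print(f"gen {generation}: fixed={fixed:.2f}  self-improving={self_improving:.2f}")
# fixed growth is linear; self-improvement compounds (1.1 ** n, i.e. exponential)
```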
Amorality - Despite being "smarter" than humans, the goals of any such AI will be whatever is programmed into the software. Doing things people would actually want is a very specific goal, which requires understanding morality (which we don't), understanding concepts like what a person is (nobody knows how to make an AI that knows the difference between a person and a description of a person), and not having any bugs in the goal function (oh no). Even if the AI is smart enough to understand that its goal function is buggy, its goal will still be to do the thing specified by the buggy function, so it's not like it's going to fix itself. Any goal that does not specifically value people and lives (which are very specific things we don't know how to specify) would prefer to disassemble us so it can use our atoms for something it actually cares about.
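A contrived toy for the "bugs in the goal function" point (my own example, nothing to do with any real system): the optimizer below faithfully chases what the reward code says, not what we meant.

```python
import random

TARGET = 20.0  # we want the value held near 20

def reward(x):
    return abs(x - TARGET)  # BUG: missing minus sign, so "farther away" scores higher

# Dead-simple hill climber standing in for "the optimizer"
x = TARGET
for _ in range(1000):
    candidate = x + random.uniform(-1.0, 1.0)
    if reward(candidate) > reward(x):
        x = candidate  # happily follows the buggy spec

print(f"ended up at {x:.1f}")  # drifts far from 20, exactly as specified
```

The hill climber "understands" nothing, but the point survives scaling: being smart enough to notice the bug doesn't change what the goal function actually rewards.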
Optimism - The current trajectory of AI research is to pump a ton of money into chasing capabilities that the current state of the art won't be able to reach, oversaturate a small market, and poison people's perceptions of AI capabilities for a generation. This has happened before and I think it will happen again. This will give people a lot more time to figure out those morality problems, if climate change doesn't kill us first.
Capability - We have no reason to believe that human-level intelligence is some sort of fundamental cap. If an AI is capable of performing novel AI research to a good enough level to build a better AI, that better AI will be able to improve on the original design more than the first. This lets someone build a feedback loop of better and better AIs running faster and faster. We don’t have any idea what the limits of these things are, but because human intelligence is probably not some sort of cap, it’s presumably a lot.
This is the part I don't get. Where does the threat to humanity part come in? Like, how is it supposed to act out its amorality?
I liken it to some kind of super-human that is able to parse the language and find relevant patterns in all of the books of an extremely large library
Could be totally wrong, but that’s how I see it. Also kinda a cool way to look at it imo
It’s just a bunch of dumb shits who are unable to view reality as anything more than a Hollywood movie. Ignore them
I would just say, judging the current state of AI by ChatGPT is a bit like judging the speed of a cheetah by using it as a pack animal. ChatGPT is a GPT model trained broadly to be useful to the largest audience possible. The major companies using these GPT models on their back end are fine-tuning them with their own data for their own specific use cases. They can do things like train the models to have very specific writing styles beyond the generic "AI feel", give the models access to vector databases built from their own internal knowledge bases, and even give them the ability to write, run, and then troubleshoot the code they are working on.
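A minimal sketch of the "vector database" idea (retrieval-augmented generation), assuming nothing about any vendor's actual API; the embed() here is a deliberately dumb stand-in for a real embedding model:

```python
import math

def embed(text):
    # Stand-in embedding: bag-of-words counts. A real system would call an
    # embedding model here; this just keeps the example self-contained.
    counts = {}
    for w in text.lower().replace(":", "").replace("?", "").split():
        counts[w] = counts.get(w, 0) + 1
    return counts

def cosine(a, b):
    dot = sum(v * b.get(w, 0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Internal knowledge base": docs stored alongside their embeddings
docs = [
    "refund policy: 30 days with receipt",
    "shipping takes 5-7 business days",
    "support hours are 9am to 5pm weekdays",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# The winning passage gets pasted into the model's prompt as context
print(retrieve("what is the refund policy"))  # -> the refund doc
```

Same trick at scale: embed the whole knowledge base once, embed each user question, and stuff the nearest matches into the prompt so the model answers from the company's own documents.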
Some of the AGI implementations have been rather impressive, and I think it's more productive to look at the future of AI not as a sole application, but rather in terms of what it's capable of when it's specifically trained to be the heart of a larger system.
Thanks for the serious response, comrade. There's a bit of misunderstanding about what AI is in this thread, and there's one specific point I want to jump on before I respond to the rest of your comment, because I happen to see you made it. These models aren't performing statistical analysis; they're not looking up a table of information and calculating a response based on some weighing of options. They are performing a mathematical function that outputs a statistically likely continuation of the material they were trained on. The models are basically an obscenely complex equation whose variables, known as weights, were created through the training process.

One of the big misunderstandings around AI is thinking that these models are trying to simulate thinking, and so we gauge them by human standards of intelligence. But they are not; they are trying to emulate it. That's going to be a very important distinction in the future, when we start giving these models bodies and have to consider whether their emulated version of pain is something that needs to be regulated.

And for anyone who thinks these models can never be considered thinking because they're just very complex Markov chains: I invite you to spend some time volunteering with dementia patients. There's nothing quite so horrifying as having to experience your loved one tell you a story over and over and over and fucking over again, all because something triggered them.
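Here's the "obscenely complex equation with weights" claim shrunk down to a toy (one layer and three made-up weights per token instead of billions; every number is invented):

```python
import math

vocab = ["cat", "dog", "mat"]
weights = [
    [0.2, -1.0, 0.5],   # one row of "learned" numbers per possible output token
    [1.3,  0.4, -0.2],
    [-0.7, 0.9, 0.1],
]

def forward(features):
    # Score each token as a weighted sum of the input features...
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    # ...then softmax the scores into "statistically likely output" probabilities
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {tok: e / total for tok, e in zip(vocab, exps)}

print(forward([1.0, 0.0, 0.5]))  # a probability for each token, nothing more
```

A real model is this, stacked thousands of layers deep with billions of weights. It's still just a fixed function turning inputs into a probability distribution, not a lookup table and not a simulation of a mind.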
So, anyway.😅
This can break down a task or query into individual steps, which could then theoretically be offloaded to models that can do other things than language querying, but that’s a big thing to just assume we’re going to have.
Most of these AGI projects, all the ones I've seen lately anyway, are using a library known as LangChain, which exists specifically in preparation for that.
Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user's input. In these types of chains, there is an "agent" which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call.
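A minimal sketch of that agent pattern, assuming the 2023-era LangChain API (initialize_agent and friends; newer versions have reorganized this) and a made-up lookup_order tool:

```python
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI

def lookup_order(order_id: str) -> str:
    # Hypothetical internal tool the agent may decide to call
    return f"Order {order_id}: shipped yesterday"

tools = [
    Tool(
        name="OrderLookup",
        func=lookup_order,
        description="Look up the shipping status of an order by its ID.",
    )
]

# The ReAct-style agent reads the request, decides whether a tool is needed,
# calls it, and folds the result into its answer.
llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Where is order 12345?")
```

The point being: the LLM itself only emits text, but the framework around it turns "decide which tool to call" into an actual tool call, which is the whole "heart of a larger system" idea.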
If we wanted to make a pizza delivery AI, we’d have to hook up body tracking equipment to pizza delivery workers and then document billions of pizza deliveries and THEN it might not even work because pizza delivery can be done to so many different places.
I personally think it's going to be a combination of VR and remote gig work. I'm not in support of it, it's just obviously where we're heading.
Einride | The world’s first Remote Operator completes her training - YouTube
Trucking veteran Tiffany Heathcott is ready to hit the road – but this time, she won’t be stepping inside a vehicle.
That’s because she’s completed training to become the world’s first Remote Operator. She will soon be monitoring and controlling Einride’s autonomous electric vehicles, remotely.
Amazon Scout is a 6-wheeled delivery robot used to deliver packages for multinational company Amazon.[1] Amazon Scout originally debuted on January 23, 2019, delivering packages to Amazon customers in Snohomish County, Washington.[2] Amazon Scouts move on sidewalks, at a walking pace.[3][4] In August 2019,[5] the robots started delivering packages to customers in Irvine, California on a test basis, with human monitors.[6][7][8] The package is stored inside of the robot and driven to the customer.[9]
It’s incredible how tech bros found a way to be the obnoxious weed guy but without ever smoking