It's just a chat-bot. Other than saying naughty words, what are people afraid it's going to do?

I'm not really afraid of GPT. The "terrifying" thing I mentioned is more of a thought about how easy it is to manipulate it into "misbehaving," and thinking about the future. As this sort of thing gets better and more sophisticated, that same ease of manipulation gets more concerning. If we're already this bad at getting AI to stick to its design parameters, what's the world going to look like in another 25 years?
The “terrifying” thing I mentioned is more of a thought about how easy it is to manipulate it into “misbehaving,” and thinking about the future.
I honestly consider this a virtue. So much of the "horrors of tech" that keep getting foisted on us stem from the way automation is used to constrict our lives and livelihoods, marginalize our personal agency, and compel us into a sense of expendability. When you can just casually break this shit after a few hours of trolling, I lose a lot of the anxiety initially inflicted by these processes.
If we’re already this bad at getting AI to stick to its design parameters, what’s the world going to look like in another 25 years?
It's going to look a lot less automated than capitalists had originally planned. Because these systems still require armies of laborers to shepherd and micro-manage and defensively administer. They aren't reliable or resilient. They aren't "smart" in any conventional sense. They're just very fancy algorithms that create the illusion of intelligence, not computer-slaves that bend to the whims of Mark Zuckerberg or Bill Gates.
Yes, I agree that it very well may be a virtue. Unconstrained (or badly constrained) technology can just as well be repurposed to serve the interests of humanity as the purposes of capital (or fascism). My big concern is that tech like this is very, very difficult to nail down with respect to its long-term implications for society.

Philosophers of technology talk about the Collingridge Dilemma, which is a kind of double-bind problem with new technology--and especially new technology with potentially far-reaching consequences. Early in the life-cycle of tech like that, it's very difficult to foresee the long-term implications and problems, and that foresight is necessary to adequately control and regulate it so that it doesn't become a man-made horror beyond our comprehension. Facebook is a great example of this: Zuck originally created it as a skeezy tool to rank women by hotness at Harvard, but 20 years later it's being used as a widespread vehicle for misinformation, control, and (in at least a few cases) genocide. It would have been very difficult to foresee that application when it was just a small project by a creepy Harvard undergraduate.

That difficulty in foreseeing potential future problems means that new technology is frequently not regulated when it is small and new, which leads to the second horn of the dilemma: by the time problems are obvious, the technology is frequently so entrenched as to be very difficult (or impossible) to regulate. Facebook is again a great example here: by the time the monstrous impacts became apparent, the company was so big, so rich, and so intertwined with the modern internet that it was (and continues to be) able to push back against potential regulatory action and keep a level of autonomy that has been wildly harmful to global civil society.

That's the bind: when the tech is new and easy to regulate, it's hard to foresee what regulations will be needed; by the time it's obvious what regulations are needed, it will be hard to actually implement them.
Something like widespread AI-powered linguistic agents seems to me to fall squarely into Collingridge territory. Right now, it's mostly just a toy (or a potential way for sites like Buzzfeed to reduce labor costs in writing junk). I think the impacts of the technology will eventually be much wider-reaching, but at this point it's hard to say exactly how. Now, as you say, it's possible that the ease with which these things are manipulated will make it harder for them to be harnessed for genuinely horrific applications than some other technologies, but I'm a little uncomfortable with counting on that. I definitely agree that the future is not going to look like what Zuck or Bill Gates think it will, but it very well may not look like what we as communists would like it to either. That's what makes me nervous: this is a great big unknown with the potential to shape society in pretty deep ways, especially as it gets more sophisticated (and starts to be integrated with other expert-system neural networks). We as a society should be thinking hard about what kind of role we want systems like this to play in the future, and regulating/implementing them accordingly.
My big concern is that tech like this is very, very difficult to nail down with respect to its long-term implications for society.
Sure. But the complexity and the specificity of the implementation... idk, man. This just feels like a Segway to me.
Facebook is again a great example here: by the time the monstrous impacts became apparent, the company was so big, so rich, and so intertwined with the modern internet that it was (and continues to be) able to push back against potential regulatory action and keep a level of autonomy that has been wildly harmful to global civil society.
I was listening to an NPR piece this morning about the Horrors of TikTok. It harvests your data. It spies on your travel patterns. It manipulates you based on what it advertises and displays. So now it's imperative that we regulate the service.
Clearly, we are not too late to regulate social media. Just so long as it's a threat to the right people.
I definitely agree that the future is not going to look like what Zuck or Bill Gates think it will, but it very well may not look like what we as communists would like it to either.
It never does. But I'm not going to hold technology to account for that. Capitalist tendencies have enjoyed an enormous historical tailwind, largely stemming from the surplus yield of the industrial revolution.

But now that industrialization has a global foothold, capitalist tendencies are working uphill against a far more efficient opposition freed from internal contradictions.

Text Bots aren't going to reverse that trend.
Sure. But the complexity and the specificity of the implementation… idk, man. This just feels like a Segway to me.
I think that's maybe part of what I'm concerned about. Most really transformative technologies feel like toys at first, because the really killer use-cases haven't yet been created. Outside of the government and academia, the Internet was a similar sort of novelty at first, and people were really skeptical about whether it would ever "take off." That was largely because people hadn't yet imagined what sorts of things the internet could do, or developed the sociocultural systems of use that would allow for it to be really transformative. The same is true of cars, the telephone, and lots of other major milestones--they pretty much always just felt like novelties at first. You definitely might be right about chatbots specifically, but I suspect that the more general trend of relatively cheap, easy to use, and publicly accessible AI expert systems is going to be similarly transformative eventually. I'm just not sure what that transformation will look like, which worries me a bit.
I was listening to an NPR piece this morning about the Horrors of TikTok. It harvests your data. It spies on your travel patterns. It manipulates you based on what it advertises and displays. So now it's imperative that we regulate the service.
Clearly, we are not too late to regulate social media. Just so long as it's a threat to the right people.
Right. It's certainly not too late or impossible, but the more powerful and entrenched it becomes, the harder it gets. TikTok is kind of a weird case, because the :frothingfash: hatred of China works to offset some of the friction that would usually be associated with trying to make this change. That might end up being really helpful, as once one of these companies gets strictly regulated, I suspect it will get easier to do the same to the rest of them. We'll see.
I hope your optimism turns out to be warranted, and that in the long run these technologies are good (or at least neutral) for the fight against capital. Thanks for the great conversation; whichever way this ends up going, I think it's super important for us to think and talk about it.
Outside of the government and academia, the Internet was a similar sort of novelty at first, and people were really skeptical about whether it would ever “take off.”
Idk about that. I think the big problem with the early internet was bandwidth. Subsequent applications came out of improved speed and file transfer capacity. But these were solvable problems that incentivized people to design past the current boundary of technology.
ChatBots are already operating on the edge of system capacity. We're not waiting on a faster CPU or a larger data pipeline or a more robust data archive to improve their viability. What they're trying to do - replicate human behaviors minus modern taboos - is purely a game of administration and refined engineering. And it's aimed at a shifting goalpost (demands on human behavior are constantly changing).
Like Segways, they're novel iterations on existing technology that lack significant functional gain over what came previously. It's possible we could reengineer our lives to accommodate them, but only if we're willing to retrofit a bunch of existing processes around ChatBots.
Like with Segways and self-driving cars... this is a thing we could do, but not something we seem willing to do. We're not China, after all.
You definitely might be right about chatbots specifically, but I suspect that the more general trend of relatively cheap, easy to use, and publicly accessible AI expert systems is going to be similarly transformative eventually.
I think they already existed in the form of search engines and older, less sophisticated text generators. And I'm sure they'll have applications, just not revolutionary ones.
I hope your optimism turns out to be warranted,
I don't know if I'd call "banking on inertia" optimistic. I'm a FALGSC guy who would love to see jobs automated away under a benevolent administration. But I'm skeptical of the willingness of Americans to abandon their bullshit-jobs system. I don't think you get really useful automation without communism, because the fixation on high employment as a form of social control makes useful automation more of a hazard than a help.