The article is pretty silly at certain points. It presents AI-generated text saying something as if it's a conscious belief being expressed, when you could easily get AI-generated text to say whatever you want with the right prompts and enough tries. Then it suggests that AI will cause a breakdown of agreed-upon reality because everyone will question whether a photo is real or fake, but as it mentions, it's long been possible to create fake images with Photoshop. There's even that famous photo with Stalin where the other guy got removed from it, so it's nothing new.
Which honestly is probably where this whole preoccupation with fake images comes from: the whole idea of a "literally 1984" dystopia. The reality is that it's much easier to mislead someone by how you emphasize information than by telling outright lies. It's not a lie that Al Gore owns (owned?) a private jet, and if you frame it in just such a way, you can get the audience to conclude that climate change isn't actually a big deal. At any moment, there are countless things happening all over the world, and some of those things provide evidence for things that are false, if only by random chance. It's sort of like if you looked at all the text AIs are putting out, singled out some text that fits the conclusion you want, then put that text directly after a bunch of creepy images. If you just cherry-pick and frame things a certain way, you can create a perception that we've created a monstrous, sapient entity that's begging us to stop. Does this phenomenon ever happen with non-creepy images? The author never asks that :thonk:
Ultimately there's simply no need to lie by making things up because it's much easier to lie by telling the truth.
yeah getting another AI to comment on Loab is silly, as it's at best a result of weird training data in a different AI
they aren't thinking entities and they don't have ideas
I'll argue a bit that there's a difference between photo manipulation and creating evidence of entirely fabricated scenarios out of thin air. But I agree with you nonetheless
The scariest thing about deep fakes is that the powerful will be able to escape the truth by claiming the truth is a deep fake.
Personally, I'm pro the "they added the guy in next to Stalin" theory.
Bruh half of those photos do not depict the same person. Whoever wrote this article is a creepypasta fan.
Pretty creepy but I'm going to guess that this is an art project someone is doing and not a real thing that is happening in DALL-E/Stable Diffusion
It is a false pattern. That face is the average of all the face data, so if you get one of a few kinds of error it just gives you that face. It is like the old Pokemon glitches, but it is scary to us because it is a human face.
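To make that concrete, here's a toy numpy sketch of the "collapses to the average" idea. Purely illustrative, not how any real generator is implemented; all the numbers and the `degenerate_conditioning` helper are made up:

```python
# Toy sketch: if several failure modes all collapse the conditioning
# vector toward the dataset mean, they all decode to the same "face",
# no matter which unrelated prompt you started from.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for learned face embeddings: 10,000 points in a 512-dim space.
face_embeddings = rng.normal(loc=0.5, scale=1.0, size=(10_000, 512))
mean_face = face_embeddings.mean(axis=0)

def degenerate_conditioning(prompt_embedding, weight):
    """Hypothetical error mode: a nonsense or negatively-weighted prompt
    carries almost no signal, so the conditioning shrinks toward the
    dataset mean."""
    return weight * prompt_embedding + (1 - weight) * mean_face

# Two completely unrelated "prompts"...
prompt_a = rng.normal(size=512)
prompt_b = rng.normal(size=512)

# ...both land almost on top of the mean once the signal is weak.
a = degenerate_conditioning(prompt_a, weight=0.01)
b = degenerate_conditioning(prompt_b, weight=0.01)
print(np.linalg.norm(a - mean_face), np.linalg.norm(b - mean_face))
# Both distances are tiny: different errors, same "average face".
```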
Another program, GPT-3, generates human-like speech, and we asked it to speak on Loab’s behalf by imitating her.
Like AI-image generators, this tool was trained on a dataset as vast as the internet itself.
What follows is an excerpt of our conversation, edited for length.
lmao how is that supposed to mean anything? It'd be ridiculous enough to prompt the same AI to "explain itself" as if it's actually conscious, but you're just asking GPT-3 about images made by a completely different AI
there is no story here except the one woven from nothingness by this author
It’s an interesting article but I doubt humans will ever create true consciousness.
I mean, humans create true consciousness every day. Maternity Wards are filled with humans creating true consciousnesses.
It's definitely a "solvable" problem from an engineering sense. But I don't think humans (or, at least, Westerners) want a True Artificial Consciousness. You can see that on the other end of the spectrum: the effort by private industry to take real human consciousnesses and strip them down until they are simple programmable Python templates.
Our society and culture are rotten at their foundations already, and 'AI' will accelerate things and produce more useless novelty and make more useless jobs redundant, but I don't really see any major shift.
I think there's going to be the same natural struggle and contradiction between how technology is presented and how it's used that we've always seen. The internet generates more spam than useful content or connectivity. TV/Radio is mostly for olds now and no longer remotely as trustworthy as it used to be.
I think the future of AI is just going to be more of the same. We'll just automate advertisements in a way that makes online outreach less and less reliable, until we've choked off the avenue of communication from any kind of productive use.
I always read it as Christian brain worms persisting in someone who thinks they've moved on.
So much modern atheism is a reaction to religious trauma as much as anything else.
There isn't a ghost; this is probably a weird response to some gore in the training data. It's interesting how this happened, but the problem is statistical in nature.
The image generators can't look at images and feel emotion; they can only associate images with names via the statistics they learned from their training data.
The AI doesn't curate and present content to you. There is plenty of gore that could be shown to you but isn't.
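For what it's worth, the "association" really is just similarity scores. Here's a rough sketch with OpenAI's CLIP, the kind of text-image model that guides a lot of these generators; the image path and captions are placeholders:

```python
# The model doesn't "feel" anything about an image; it just scores how
# well the image's embedding lines up with each caption's embedding.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("some_image.png")).unsqueeze(0).to(device)
captions = ["a creepy woman", "a landscape", "a cartoon dog"]
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

# Just similarity statistics: no emotion, no curation.
print(dict(zip(captions, probs.squeeze().tolist())))
```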
I'd never heard of this but damn that's pretty cool, time to find a job helping create AI so life can become easier I guess.
Roko's basilisk is just some up-his-own-ass nerd re-creating Pascal's wager and thinking he invented something new
yeah seems pretty different to me, but stems from the same ideas
The basic concept in both is: what if a completely unprovable thing exists? If it does, it would behoove you to act as though it were real. Pascal's wager is "what if God exists"; Roko's basilisk is "what if God exists, but with blinky LED lights?"
Either way, completely useless. If you want me to believe in something, convince me of it, don't try to convince me to pretend to believe in it. I'm not hedging on metaphysical bets without evidence.
Mostly accurate, but the thing you're not mentioning is that both Pascal and Roko scared themselves with Big Numbers. In both cases, the cost of acting as a true believer is manageable, human-scale, but the cost for the unbeliever is near infinite. So (their thinking goes), even if the existence of God or the omnipotent singleton AI is very, very unlikely, the rational thing to do is to behave as if they existed.
Now, to an outsider, it's clear that you can imagine an infinity of mutually contradictory infinite threats, which makes these arguments totally bogus. But if you are already a true believer, you discount the other threats.
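Just to put toy numbers on that (everything here is invented for illustration):

```python
# The "Big Numbers" trick in expected-value form. The prior on the threat
# is tiny, but the threatened cost is made so huge that it dominates
# anyway -- until a second, contradictory threat with the same structure
# shows up, at which point the wager stops telling you anything.
p = 1e-12                # vanishingly small chance the basilisk/God exists
cost_of_belief = 1_000   # human-scale cost of living as a believer
cost_of_doubt = 1e30     # "near infinite" punishment for the unbeliever

ev_believe = -cost_of_belief
ev_doubt = -p * cost_of_doubt
print(ev_believe, ev_doubt)  # -1000 vs -1e18: the Big Number wins

# Now add a rival basilisk that punishes belief in the first one, with
# the same tiny probability and the same huge cost:
ev_believe_with_rival = -cost_of_belief - p * cost_of_doubt
ev_doubt_with_rival = -p * cost_of_doubt
print(ev_believe_with_rival, ev_doubt_with_rival)
# The huge terms now sit on both sides, so believing is strictly worse:
# you paid the belief cost for nothing.
```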
Personally I chose to take Cthulhu's Wager seriously and went completely, irrevocably insane. IA! IA!
I'm not trying to argue about the concept, just saying that when you look at these things with a reductive lens, you can make anything the same.
The novel part of Roko’s basilisk is the time-loop component, where the AI doesn't exist in the present but exists in the future and has the ability to manipulate events to bring about its own existence. It's kind of a dumb theory, but whatever. It makes for a better movie than a TOE.
I guess I don't see how the novel part is particularly novel, it's just the shoehorn needed to turn "what if god could damn me to hell" into "what if future AI could damn me to hell"
Because it isn't metaphysical, it's science fiction/speculation and the "hell" isn't an other place/plane of reality. I guess this conversation could go in the direction that quantum theories allude to the same unexplained phenomenon as religions and could eventually meet. None of this is particularly interesting to me, honestly, so have a good night.
NO. Bad. Your internet privileges have been revoked until you read all of Saint Augustine to learn how banal and ridiculous this is.
I would strongly suggest you don't. Generations of Christian theologian pedants didn't waste their lives arguing about how many angels could dance on the head of a pin for us to start taking bs like that seriously again.
The joke of Roko's Basilisk is how quickly it becomes a self-fulfilling prophecy. The nut of the idea isn't even unique to AI. It's just a new twist on the old "We have to kill them before they kill us" theme.
As soon as you extrapolate the idea out to rival populations, you're not just dealing with Roko's Basilisk, but with "China's Basilisk versus America's Basilisk", with the subtext that one of us has to build it first before the other unleashes it on us. It's in the same vein as turn-of-the-20th-century racists insisting that White Slavery is just around the corner if black people get too rich or too well-enfranchised. Or anti-migrant xenophobes who believe The Illegals Are Stealing Our Jobs. Or the drug warriors who insist cartels will take over the country if we're not constantly fighting to criminalize drug use. Or the Nuclear Arms Race.
Roko's Basilisk is another incarnation of the proto-Hobbesian belief in a war of All Against All. It isn't something we will build so much as something we've been building in various flavors and styles since the nation was founded.
This is a big part of the metaplot of Eclipse Phase. No one is entirely sure what happened that caused the TITAN AIs to go rogue and bring about The Fall and the destruction of 90% of transhumanity, but one of the theories is that the USA's AIs were given free rein to self-improve in order to counter China's super-AI project and things went very, very badly.
Regardless of the article's tone/content, it's fucked up that completely unrelated prompts can return those kinds of images. Goreposting on the early internet really fucked me up. I don't think we need an endless supply of unimaginably traumatic gore. The unrealness of it could make it even scarier to a teen, I feel like, too.
In another ten years, after AI behaviour has been studied academically (which I feel like AI developers are not in a hurry to facilitate, so that the product preserves its mystique), we're all going to be super-jaded about this.
Like, someone's going to notice something like this and someone else is going to say "oh, yeah, that's just the Chang-Plimpton effect. It happens when multiple [hoozitz]-type parameters are very high in the source image, essentially creating a feedback loop in the [whatchamacallit]."
I mean, I'm jaded about it now. The article - as written - feels like some Creepypasta I'd have seen on 4chan twenty years ago.
Oooo! A mysterious uncanny-valley ghost-woman image is popping up in the back of all my negative-of-a-negative-of-a-negative search results. Let's try to mystify this into a supernatural phenomenon, rather than recognize it as a simple AI heuristic scraping the bottom of the logical barrel.
Someone at ABC News needed a fresh spin on the topic of AI Art, which was already saturating media markets. So they wrote a ghost story about AI Art (or, more likely, found a ghost story and slapped a journalistic veneer over the top). I wouldn't even be surprised if someone used a chatbot to reskin the old Pokemon urban legend about an IRL kid who was killed by a haunted copy of the game.
So, this probably means nothing. Probably just a bunch of pictures of the same or similar-looking woman got into the training set. BUT I looked at the Twitter thread, and my interest is piqued by similar images being generated by asking for (the opposite of) nonsense text in an image. That could be good insight into how these details are embedded in the neural network.
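If anyone wants to poke at it themselves, something like this should approximate that experiment with Stable Diffusion via Hugging Face diffusers. It's only a sketch: the negative-prompt string below is a stand-in for the actual nonsense text from the thread, and the original experiment used a different generator's negative-weight syntax:

```python
# Steer the sampler *away* from a nonsense phrase and see what the model
# considers its "opposite".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="",  # no positive target at all
    negative_prompt="nonsense logo text here",  # stand-in for the thread's text
    num_inference_steps=50,
).images[0]
image.save("anti_prompt.png")
```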
I'm begging authors who don't know how machine learning works to stop talking out of their asses, for the love of god. Things like this are only the beginning of this new paranoia. People are gonna get really weird when, like, GPT-4 comes out, and this could lead to mildly funny stuff (like the Google guy who imploded his career by suing in the name of a chatbot) but also much darker shit (people shot up animation studios for less, ffs)
Not saying that our usual discussion around automation/attribution issues is dumb; far from it, that's important. But the model isn't sentient, there's no ghost in the machine, this isn't a movie, stop.
Aren't the images created by AI based on manipulating existing images?