My worry in 2021 was simply that the TESCREAL bundle of ideologies itself contains all the ingredients needed to “justify,” in the eyes of true believers, extreme measures to “protect” and “preserve” what Bostrom’s colleague, Toby Ord, describes as our “vast and glorious” future among the heavens.
Golly gee, those sure are all the ingredients for white supremacy these folks are playing around with. Good job there are no signs of racism... right? Right?!
In other news, I find it wild that big Yud has gone on an arc from "I will build an AI to save everyone" to "let's do a domestic terrorism against AI researchers." He should be careful, someone might think this is displaced rage at his own failure to make any kind of intellectual progress while academic AI researchers have passed him by.
(Idk if anyone remembers how salty he was when AlphaGo showed up and crapped all over his "symbolic AI is the only way" mantra, but it's pretty funny to me that the very group of people he used to say were incompetent are a "threat" to him now they're successful. Schoolyard bully stuff and wotnot.)
> In other news, I find it wild that big Yud has gone on an arc from “I will build an AI to save everyone” to “let’s do a domestic terrorism against AI researchers.” He should be careful, someone might think this is displaced rage at his own failure to make any kind of intellectual progress while academic AI researchers have passed him by.
Disclaimer/framing: the 'ole Yudster only came to my attention fairly recently, so the following is observation/speculation (and I'll need more evidence/visibility to see if the guess pans out).
A few years ago I happened to deal with someone who was a hell of a grifter, in intensity, scope, and impact. It was primarily through that experience that I gained a handle on a number of things that have served me well in spotting grift elsewhere. Some things I've been observing in that light:
1. he's clearly talking out of his ass almost all the time
2. the shell game applies
3. I think 'ole Yuddy is aware that he's not as clever as he claims he is, and is very salty about that[0]
(1) and (2) mean he has to continuously keep ahead of the marks ^W rats. The guy is fairly clearly widely read/informed, and can manage to deal with some complexity[1] in concepts. But because of (3) he can never be as right as he wants to be, so he has to keep pivoting the grift to a new base before he gets egg on his face. His method for doing this is "abandon all hope," but practically it's an attempt to retcon history, and if anyone tried to really engage him on it he'd likely get ragey and accuse them of working from "outdated information" or some other shit (because lol, who needs to acknowledge their own past actions, amirite)[2]
[0] - this is a guess on my part, but all his "imagine a world in which Einstein wasn't exceptional, because there are many of them" shit comes across to me this way. Anyone else?
[1] - not very well, of course; this is why the multi-million-word vomits exist. But "some".
[2] - this is something I've seen a lot with narcissists: they can never be wrong, and "making them" be wrong (i.e. simply providing proof of their past actions/statements) makes them go nuclear.
My perspective is a little different (from having met him): I think he genuinely believed a lot of what he said, at one point at least... but you're pretty much spot on in all the ways that matter. He's a really bad person, of the "should probably be in jail for crimes" kind.
The line between “actually believes $x” and “appears to actually believe $x” can be made heeeeeella fuzzy (and people in that space take advantage of that)
Curious about the latter half of your remarks. Is that opinion, or something grounded in other knowledge that isn’t widely known yet?
Good point with the line! Some of the best liars are good at pretending to themselves they believe something.
I don't think it's widely known, but it is known (there are old SneerClub posts about it somewhere) that he used to feed the people he was dating LSD and try to convince them they "depended" on him.
First time I met him, in a professional setting, he had his (at the time) wife kneeling at his feet wearing a collar.
Do I have hard proof he's a criminal? Probably not, at least not without digging. Do I think he is? Almost certainly.
> First time I met him, in a professional setting, he had his (at the time) wife kneeling at his feet wearing a collar.
hold on, you can’t just write this paragraph and then continue on as if it’s not a whole damn thing
ah yes the first time I met yud he non-consensually involved me in his bondage play with his wife (which he somehow incorporated into a business meeting)
😅 Honestly I don't know what else to say; the memory haunts me to this day. I think it was the point when I went from "huh, the rats make weirdly dumb mistakes considering they've made posts exactly about these kinds of error" to "wait, there's something really sinister going on here."
Can you say where and when this happened without doxxing yourself? Was anyone else around while he and his wife were doing this?
Personally I imagine him as the leader of a flying-saucer cult where an alien vehicle suddenly, actually arrives. He's running around panicking, tearing his hair out, because this wasn't what he planned; he just wanted money and bitches as a cult leader. It's one thing to say the aliens will beam every cult member up and take them to paradise, but if you see a multi-kilometer alien vehicle approaching Earth, whatever its intentions are, no one is going to be taken to paradise...
> academic AI researchers have passed him by.
Just to be pedantic, it wasn't academic AI researchers. The current era of AI began here: https://www.npr.org/2012/06/26/155792609/a-massive-google-network-learns-to-identify
Since 2012, academic AI researchers have never had the compute hardware to contribute to frontier AI research, except for some who worked at corporate giants (mostly DeepMind) and went back into academia.
They are getting more hardware now, but the hardware required to stay relevant, and to develop a capability that commercial models don't already have, keeps increasing. Table stakes are now something like 10,000 H100s, or about $250-500 million in hardware.
https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini
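For what it's worth, the arithmetic behind that "table stakes" figure checks out if you assume roughly $25k-$50k per H100 (my assumption for illustration, not a quoted vendor price); a quick sketch:

```python
# Back-of-envelope check of the "10,000 H100s = $250-500M" figure above.
num_gpus = 10_000
price_low = 25_000   # assumed USD per H100 (illustrative, not a quoted price)
price_high = 50_000  # assumed USD per H100 (illustrative)

total_low = num_gpus * price_low    # 250,000,000
total_high = num_gpus * price_high  # 500,000,000
print(f"${total_low / 1e6:.0f}M to ${total_high / 1e6:.0f}M")  # prints "$250M to $500M"
```

That's hardware cost only; networking, power, and datacenter buildout would push the real number higher.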
I am not sure MIRI tried any meaningful computational experiments. They came up with unrunnable algorithms that might theoretically work but would need nearly infinite compute.
As you were being pedantic, allow me to be pedantic in return.
Admittedly, you might know something I don't, but I would describe Andrew Ng as an academic. Industry partnerships like the one in the article you linked are really, really common in academia. In fact, it's how a lot of our research gets done. We can't do research without funding, so a big part of being an academic is persuading companies to work with you.
Sometimes companies really, really want to work with you, and sometimes you've got to provide them with a decent value proposition. This isn't just AI research either, but very common in statistics, as well as biological sciences, physics, chemistry, well, you get the idea. Not quite the same situation in humanities, but eh, I'm in STEM.
Now, in terms of universities having the hardware: certainly these days there is no way a university will have anything close to the compute power that a large company like Google has access to. Though even back in 2012 (and well before), universities had supercomputers. It was pretty common to have a resident supercomputer that you'd use. For me (my background's originally in physics), back then we had a supercomputer in our department, the only one at the university, and people from other departments would occasionally ask to run stuff on it. A simpler time.
It's less that universities don't have access to that compute power; it's more that they just don't run server farms. So we pay for it from Google or Amazon and so on, like everyone in the corporate world (except, of course, the companies that run those servers; they still have costs and lost revenue). Sometimes that's subsidized by working with a big tech company, but not always.
I'm not even going to get into the history of AI/ML algorithms and the role of academic contributions there, and I don't claim that industry played no role; but the narrative that all these advancements are corporate just ain't true, compute power or no. We just don't shout as loud or build as many "products."
Yeah, you're absolutely right that MIRI didn't try any meaningful computational experiments that I've seen. As far as I can tell, their research record is... well, staring at ceilings and thinking up vacuous problems. I actually once (when I flirted with the cult) went to a seminar that big Yud himself delivered, and he spent the whole time talking about qualia; when someone asked him to describe a research project he was actively working on, he refused, on the basis that it was "too important to share."
"Too important to share"! I've honestly never met an academic who doesn't want to talk about their work. Big Yud is a big let down.
A joke I heard in the last century: Give a professor a nickel and they'll talk for an hour. Give 'em a quarter and you'll be in real trouble.
The description of how utopians see critics ("profoundly immoral people who block the path to utopia, threatening to impede the march toward paradise, arguably the greatest moral crime one could commit") is extremely similar to the way scientologists see their critics and ex-members. I suppose at least TESCREALists have a slightly higher measure of independence than scientologists and are thus less likely to be convinced to poison a critic's dog or send them threatening letters.
Not just Scientology; Peoples Temple, Aum, or Shining Path all apply too.
Edit: they also have more money and political influence at this point than the above. This isn't good, imo.
Yeah it is the classic cult characteristic. Synanon members putting a snake in someone's letterbox is another example. Also Hare Krishnas, MOVE, etc etc.
The one issue I have is: what if some of their beliefs turn out to be real? How would it change things if the Scientologists got hold of a two-way communication device (say they found it buried in Hubbard's backyard or whatever) that appears to be non-human technology, and were able to talk to an entity claiming to be Xenu? That doesn't mean their cult religion is right. But say the entity is obviously non-human, it rattles off methods to build devices current science has no way to build, other people build the devices and they work, and YOU can pay $480 a year and get FTL walkie-talkies or some shit sent to your door. How does that change your beliefs?
while it's true that if my dick had wings, it would be a magical flying unicorn pony, so far this hasn't been shown to be the case at all, so i'm not putting effort into the hypothetical
What? I was describing how cults/high-control groups react to criticism. I wasn't trying to assess how accurate their beliefs are. Cults rely on having some beliefs which reasonable people might agree with. Those are the beliefs they present to the public. Cult literature often sounds plausible or benign even if it's not factually accurate.
Before there was greater awareness of what cults are and how they work, it wasn't uncommon for early press about cult groups to conclude that while some of the cult's beliefs were strange, they had good values and were doing good things for their communities so they were probably harmless. It was only later that stories begin to emerge about the extreme levels of control that cults were exercising over their members, how that control led to the exploitation and abuse of members, and how limited and transactional their "good works" were.
If a group with that model of control and exploitation claimed to have access to a source of genuinely new and scientifically significant knowledge, they are the worst people to be in control of it, because:

a) Cults keep back the larger part of their beliefs from the public in order to extract as much money, volunteer time, and other resources from their members as possible. If a cult did have a direct line to Xenu, it would be directly in their interest to strictly limit how much other people could know about Xenu without paying exorbitant fees and submitting to cult authority.

b) Cults are run by people whose ethics are compromised. Cult leaders believe above all else in their right to power and/or wealth, and everything else, including the health and safety of others, comes second. They bully and indoctrinate their subordinates until members believe there is no good and bad so much as there are things that are good for the cult and things that are bad for the cult. If people with such compromised ethics gained access to Xenu's special information (why are we assuming Xenu would be wise and helpful anyway? In Scientology mythos, Xenu is evil. And also dead.), they would use it to improve the position of the cult and impose their beliefs on as many people as possible.

c) Because of the above, it would be extremely difficult for non-members to assess the accuracy of any information the cult provided.
Interesting read. I've read a few articles about the negatives of longtermism before this, but this was the first one that actually made sense