Do you think the problems you outlined are solvable even in theory, or must humans slog along at the current pace for thousands of years to solve medicine?
Next time it would be polite to answer the fucking question.
Sorry sir:
*I have to ask, on the matter of (2): why?* I think I answered this.
What’s being signified when you point to “boomer forums”? That’s an “among friends” usage: you’re free to denigrate the boomer fora here. And then once again you don’t know yet if this is one of those “boomer forums”, or you wouldn’t have to ask.
What people in their droves are now desperate to ask, I will ask too: which is it, dummy? Take the stopper out of your speech hole and tell us how you really feel.
I am not sure what you are asking here, sir. It's well known to those in the AI industry that a profound change is upon us and that GPT-4 shows generality for its domain, and robotics generality is likely also possible using a variant technique. So the individuals unaware of this tend to be retired people who have no survival need to learn any new skills, like my boomer relatives. I apologize for using an ageist slur.
Primary myoblasts double on average every 4 days! So given infinite nutrients, if you started with 1 gram of meat, it would take ... about 369 days to equal the mass of the Earth!
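A quick sanity check of that doubling arithmetic, as a sketch (the 4-day doubling time is from the post above; the mass of the Earth is a standard figure):

```python
import math

# How long until 1 gram of cells doubling every 4 days equals Earth's mass?
earth_mass_g = 5.97e27          # mass of Earth, in grams
doubling_days = 4               # doubling time claimed above

doublings = math.log2(earth_mass_g / 1.0)   # ~92.3 doublings needed from 1 g
days = doublings * doubling_days
print(round(days))              # 369
```

About 92 doublings at 4 days each lands almost exactly on the 369-day figure.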
*removed externally hosted image*
Doesn't the futurism/hopium idea of building an ideal city go back to Disney, who more or less does have feudal stronghold rights over Florida?
https://en.wikipedia.org/wiki/EPCOT_(concept)
Because of these two modes of transportation, residents of EPCOT would not need cars. If a resident owned a car, it would be used "only for weekend pleasure trips."[citation needed] The streets for cars would be kept separate from the main pedestrian areas. The main roads for both cars and supply trucks would travel underneath the city core, eliminating the risk of pedestrian accidents. This was also based on the concept that Walt Disney devised for Disneyland. He did not want his guests to see behind-the-scenes activity, such as supply trucks delivering goods to the city. Like the Magic Kingdom in Walt Disney World, all supplies are discreetly delivered via tunnels.
Or The Line in Saudi Arabia.
Definitely Sneer-worthy, though it's sometimes worked. Napoleon III had Paris redesigned, which was probably a good thing. But they are stuck with that design to this day, which is probably bad.
Thanks x2. Thanks also for the humility to admit I might be correct, even if I am an interloper.
The counterargument is GPT-4. For the domains this machine has been trained on, it has a large amount of generality: a large amount of capturing that real-world complexity and dirtiness. Reinforcement learning can make it better.
Or in essence: if you collect colossal amounts of information, yes, pirated from humans, and then choose what to do next by "what would a human do?", this does seem to solve the generality problem. You then fix your mistakes with RL updates when the machine fails at a real-world task.
Did this happen with Amazon? The VC money is a catalyst: it advances money for a share of future revenues. If AI companies can establish a genuine business that collects revenue from customers, they can reinvest some of that money into improving the model, and so on.
OpenAI specifically seems to have needed about 5 months to reach a 1 billion USD annual revenue run rate; by the way tech companies are usually valued, that's already more than 10 billion in intrinsic value.
If they can't - if the AI models remain too stupid to pay for - then obviously there will be another AI winter.
https://fortune.com/2023/08/30/chatgpt-creator-openai-earnings-80-million-a-month-1-billion-annual-revenue-540-million-loss-sam-altman/
I agree completely. This is exactly where I break with Eliezer's model. Yes, obviously an AI system that can self-improve can only do so until either (1) it is the best algorithm that can run on the server farm, or (2) finding a better algorithm takes more compute than the improvement is worth.
That's not a god. Run this in an AI experiment now and it might crap out at double the starting performance or less, and not even be above the SOTA.
But if robots can build robots, and the current AI progress shows a way to do it (foundation model on human tool manipulation), then...
Genuinely asking: I don't think it's "religion" to suggest that a huge speedup in global GDP growth would be a dramatic event.
Currently the global economy doubles every ~23 years. Robots building robots and robot-making equipment can probably double faster than that. It won't be in a week or a month; energy requirements alone limit how fast it can happen.
Suppose the doubling time is 5 years, just to put a number on it. The economy would then be growing roughly 4.6 times faster than it was previously (growth rate scales with the inverse of the doubling time). This continues until the solar system runs out of matter.
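A quick check of the rate comparison, assuming steady exponential growth at both doubling times (the 23-year and 5-year figures are from the discussion above):

```python
import math

current_doubling_years = 23   # present-day world economy
fast_doubling_years = 5       # hypothetical robots-building-robots economy

# For exponential growth, e^(r*T) = 2, so the growth rate r = ln(2) / T.
r_now = math.log(2) / current_doubling_years
r_fast = math.log(2) / fast_doubling_years

print(round(r_fast / r_now, 1))   # 4.6
```

Since r is proportional to 1/T, the ratio is just 23/5 = 4.6: the faster economy grows about four and a half times faster, not an order of magnitude faster.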
Is this a relevant event? Does it qualify as a singularity? Genuinely asking, how have you "priced in" this possibility in your world view?
take some time and read this
I read it. I appreciated the point that human perception of current AI performance can scam us, though this is nothing new. People were fooled by Eliza.
It's a weak argument though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means "if the machine is given a task, what is the probability it completes the task successfully?". Theoretically an infinite Chinese room can have functional intelligence (the machine just looks up the sequence of steps for any given task).
People have benchmarked GPT-4 and it has general functional intelligence at tasks that can be done on a computer. You can also just go pay $20 a month and try it. It's below human level overall, I think, but still surprisingly strong given it's emergent behavior from predicting tokens.
I appreciated this post because it never occurred to me that the "thumb might be on the scales" for the "rules for discourse" that seem to be the norm around the rat forums. I personally ignore most of it; however, the "ES" (epistemic status) rat phrase is simply saying, "I know we humans are biased observers; this is where I'm coming from." If the topic were renewable energy and I were the head of extraction at BP, you could expect that whatever I have to say is probably biased against renewable energy.
My other thought reading this was: what about the truth? Maybe the mainstream is correct about everything. "Sneer club" seems to be mostly mainstream opinions. That's fine, I guess, but the mainstream is sometimes wrong about issues that have been poorly examined, or about near-future events. The collective opinion of everyone doesn't really price in things that are about to happen, even when they're obvious to experts. For example, mainstream opinion on covid usually lagged several weeks behind Zvi's posts on LessWrong.
Where I am going with this: you can point out bad arguments on my part, but in the end, does truth matter? Are we here to score points on each other, or to share what we think reality is, or will very soon be?
And just to be clear: for one to be "lost in the AI religion", the claims have to be false, correct? That is, we will not see the things I mentioned within the timeframes I gave (7 years and 17 years; and implicitly, if there is no immediate progress toward the nearer deadline within 1 year, it's not going to happen).
Google's Gemini will not be multimodal, nor capable of learning to do tasks via reinforcement learning at human level, right? Robotics foundation models will not work.
Real talk: a real doll with the brain of a calculator would be a substantial product improvement.
Serious answer, not from Yudkowsky: the AI doesn't do any of that. It helps people cheat on their homework, write their code and form letters faster, and brings in revenue. The AI's owner uses the revenue to buy GPUs. With the GPUs they make the AI better. Now it can do a bit more than before, so they buy more GPUs, and theoretically this continues until the list of tasks the AI can do includes "most of the labor in a chip fab", GPUs become cheap, and then things start to get crazy.
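The loop described above can be sketched as a toy compounding model. Every constant here is invented for illustration; the only point is that reinvesting a fixed fraction of revenue at a positive return compounds exponentially:

```python
def reinvestment_loop(revenue, steps, reinvest=0.5, return_per_dollar=1.2):
    """Toy model: each cycle, half of revenue is spent on GPUs/training,
    and each invested dollar comes back as $1.20 of new annual revenue."""
    for _ in range(steps):
        invested = reinvest * revenue
        revenue += invested * (return_per_dollar - 1.0)  # net gain from capability
    return revenue

# Starting from $1B/year, five reinvestment cycles compound to ~1.6e9:
print(reinvestment_loop(1e9, 5))
```

Each cycle multiplies revenue by 1 + 0.5 × 0.2 = 1.1, so five cycles give 1.1^5 ≈ 1.61× growth; whether the real loop runs hot or fizzles depends entirely on whether `return_per_dollar` stays above 1.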
Same elementary school logic but I mean this is how a nuke works.
They also hyped autonomous cars, and the Internet itself including streaming video, for years before either was practical. Your filter of "it's all hype" only works 99 percent of the time.
Just to summarize your beliefs, I think: rationalists are wrong about a lot of things, and assholes. And also the singularity (which predates Yud's existence) is not in fact possible by the mechanism I outlined.
I think this is a big crux here. It's one thing if it's a cult around a false belief. It's kind of a problem to sneer at a cult if the core claim of it happens to be a true law of nature.
Or an analogy: I think GPT-4 is like the data from the Chicago Pile. That data was enough to convince the domain experts of the day that a nuke was going to work, to the point they didn't even test Little Boy; you believe otherwise. Clearly machine generality is possible, and clearly it can solve every problem you named including, with the help of humans, ordering every part off Digi-Key, loading the pick-and-place, inspecting the boards, building the wire harnesses, and so on.
Just to be clear, you can build your own telescope now and see the incoming spacecraft.
Right now you can go task GPT-4 with a problem about equal to undergrad physics, let it use plugins, and it will generally get it done. It's real.
Maybe this is the end of the improvements, just like maybe the aliens will not actually enter orbit around earth.
Sure, but they were four-function calculators a few months ago. The rate of progress seems insane.
My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.
That's right. Eliezer's LSD vision of the future where a smart enough AI just figures it all out with no new data is false.
However, you could... build a fuckton of robots. Have those robots do experiments for you. You decide on the experiments, probably using a procedural formula: for example, you might try a million variations of wing design, or a million molecules that bind to a target protein, and so on. Humans already do this in those domains; this just extends it.
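The "procedural formula" part is the easy bit, and can be sketched in a few lines: enumerate candidate designs from a parameter grid, then (in reality) dispatch each one to a robot or a simulator. The wing parameters and names here are invented for illustration:

```python
from itertools import product

def wing_candidates(spans, chords, sweep_angles):
    """Yield every combination of the given (hypothetical) wing parameters."""
    for span, chord, sweep in product(spans, chords, sweep_angles):
        yield {"span_m": span, "chord_m": chord, "sweep_deg": sweep}

# A 10 x 10 x 10 grid gives 1,000 candidate designs;
# scale each list to 100 entries and you get the million variations above.
designs = list(wing_candidates(
    spans=[1.0 + 0.1 * i for i in range(10)],
    chords=[0.2 + 0.02 * i for i in range(10)],
    sweep_angles=[5 * i for i in range(10)],
))
print(len(designs))   # 1000
```

Generating the candidates is trivial; the actual bottleneck is the physical throughput of whatever builds and tests them.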
Nail on the head, especially on the internet/'tech bro' culture. All my leads at work also have such an "extreme OCD" kind of attitude. Sorry if you feel emotionally offended; I didn't mean it.
The rest of your post is, ironically, very much something that Eliezer posits a superintelligence would be able to do. Or something from the anime Death Note. I use a few words or phrases; you analyze the shit out of them, try to extract all the information you can, and conclude all this stuff like:
opening gambit
“amongst friends”
hiding all sorts of opinions behind a borrowed language
guff about “discovering reality”
real demands as “getting with the right programme”,
allegedly, scoring points “off each other”
“Off each other” was another weasel phrase
you know that at least at first blush you weren’t scoring points off anyone
See, everything you wrote above is a possibly correct interpretation of what I wrote. It's like English-lit analysis after the author's dead. Eliezer posits a superintelligence could use this kind of analysis to convince operators with admin authority to break the rules, the way L in Death Note uses it to almost catch the killer.
It's also all false in this case (which is also why a superintelligence probably can't actually do this). I've been on the internet long enough to know it is almost impossible to convince someone of anything, unless they were already willing and you just link some facts they didn't know about. So my gambit was actually something very different.
Do you know how you get people to answer a question on the internet? Post something that's wrong*. And it clearly worked: there's more discussion on this thread than in this entire forum across several pages, maybe since it was created.
*Ironically, in this case I posted what I think is the correct answer, but it disagrees with your ontology. If I wanted LessWrongers to comment on my post I would need a different OP.