The question of whether or not we have the rudiments for creating a sentient AI is incredibly controversial, and I would hesitate to call what GANs do "intuition". How much training data would a GAN need to pass the Turing Test?
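For what it's worth, here's a toy sketch of what a GAN is actually doing under the hood (entirely my own illustration, with arbitrary hyperparameters and a 1-D Gaussian as the "training data", nothing from any particular paper): two networks playing a counterfeiter-vs-cop game. There's nothing in the loop you'd mistake for intuition.

```python
# Minimal GAN sketch: generator vs. discriminator over a toy 1-D Gaussian.
# Hyperparameters (latent_dim, layer sizes, learning rates) are arbitrary
# illustrative choices.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: noise -> fake sample
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: sample -> probability it is real
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # The "training data": samples from N(4, 1.25) that G has to imitate.
    return 4.0 + 1.25 * torch.randn(n, 1)

for step in range(2000):
    # Discriminator step: learn to tell real samples from fakes.
    real = real_batch()
    fake = G(torch.randn(64, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce fakes the discriminator accepts as real.
    fake = G(torch.randn(64, latent_dim))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, latent_dim)).mean().item())  # drifts toward ~4.0
```

Swap the toy Gaussian for images and scale the networks up and you get the famous results, but the game itself doesn't change: it's distribution matching, not understanding.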
The question of whether strong AI is even possible has been debated for ages, and still is. As skeptical as I am of current machine learning techniques leading to true AI, I think the cellular brain-modeling idea is an outright joke.
It looks like the Blue Brain Project has been a dud, judging by recent articles about it, and the head of the project was ousted in 2016. Beyond the ridiculous "1 human brain = 1000 rat brains" claim, it's not clear what it's even supposed to achieve.
Let's say you finish creating your model of a human brain. What then? You've got a digital cellular representation of what is at best a baby brain. How does it learn?
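To put that question concretely, here's roughly what cellular-level simulation boils down to: a toy leaky integrate-and-fire neuron with made-up parameters (my own sketch, nothing to do with Blue Brain's actual models). You can step the dynamics forward forever, and nothing learns unless you bolt on a plasticity rule and feed it the right experience.

```python
# Toy leaky integrate-and-fire neuron: the kind of cellular-level dynamics
# a brain model steps forward in time. All parameters are arbitrary
# illustrative values.
import numpy as np

dt = 0.1          # ms per step
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential after a spike (mV)

v = v_rest
spikes = []
rng = np.random.default_rng(0)

for step in range(10000):
    i_in = rng.normal(1.8, 0.5)  # noisy input current (arbitrary units)
    # Leaky integration: decay toward rest, pushed up by the input.
    v += dt * ((v_rest - v) + i_in * tau) / tau
    if v >= v_thresh:
        spikes.append(step * dt)
        v = v_reset  # fire and reset

print(f"{len(spikes)} spikes in 1 second of simulated time")
# Nothing here changes with experience: without an explicit plasticity
# rule and structured input, the "brain" just ticks.
```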
And yeah, sure, I'm by no means convinced it's impossible. But I don't think the way to go about it is the way people have been trying for at least half a century. This Kurzweilian notion that Moore's law will let us brute-force a digital mind into existence any day now is wacky.
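For reference, here's the flavor of back-of-envelope arithmetic that argument rests on. Every figure below is a rough ballpark I'm plugging in purely for illustration (synapse count, firing rate, cost per synaptic event, starting machine speed); the real uncertainty spans orders of magnitude, and even granting the numbers, all you get is a date when the operation count looks feasible, not a mind.

```python
# Back-of-envelope version of the "Moore's law will get us there" argument.
# All figures are rough assumed ballparks, not measurements.
synapses = 1e14          # assumed synapse count in a human brain
avg_firing_rate = 10     # Hz, assumed average
flops_per_event = 1e3    # assumed cost to simulate one synaptic event

brain_flops = synapses * avg_firing_rate * flops_per_event  # ~1e18 FLOP/s

machine_flops = 1e15     # assume a ~petaFLOP machine as the starting point
doubling_years = 2.0     # classic Moore's-law-style doubling period

years = 0.0
while machine_flops < brain_flops:
    machine_flops *= 2
    years += doubling_years

print(f"Brain estimate: {brain_flops:.1e} FLOP/s")
print(f"Naive catch-up time at steady doubling: ~{years:.0f} years")
# The catch: having ~1e18 FLOP/s says nothing about what program to run on it.
```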
We're assuming "artificially intelligent" here means something with human-like consciousness and self-awareness, right?
Because obviously that would be fucked up, but I certainly can't say I care about a robot without sentience.