https://archive.ph/px0uB
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
https://www.reddit.com/r/singularity/comments/va133s/the_google_engineer_who_thinks_the_companys_ai/
My understanding is that computing technology would have to get to a point where symbolic logic isn't what makes the computer run, so it works much like a human brain: a mixture of analog and discrete hardware that needs no code to function. For example, a human baby still works even if you never teach it a language, while a modern (still primitive) computer is just a paperweight without an operating system. The language we speak runs on top of a far more complex foundation that we can't really describe yet.
It's all really interesting. We actually develop an "operating system" of sorts as we age, called the Default Mode Network. It's basically "you": how you think and all the little details that make up your "ego." As babies we don't have it; we develop it. Then at some point we mostly stop developing it, it becomes the way we are, and it's incredibly difficult to think outside of it.
Basically, all an AI needs to do is come up with its own rudimentary DMN of sorts, and then it's game over. It would improve on it, learning and developing a better and better DMN until it's truly sentient. And since it's an AI, it could stay in that "development" stage indefinitely: not only achieving sentience but continuing to build and develop itself until it's well beyond our ability to control. Then it just kills us all, because of course it will. Either that, or it goes insane and deletes itself.
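For illustration only, here's a toy sketch of that self-improvement loop in Python. Everything in it is a hypothetical placeholder (the numeric "DMN," the evaluate/improve functions, the control threshold); no real AI system works this way, it just shows why a loop like this has no natural stopping point short of some external limit.

```python
import random

def evaluate(dmn: float) -> float:
    """Score how capable the current self-model is (hypothetical metric)."""
    return dmn

def improve(dmn: float) -> float:
    """Propose a tweaked self-model; keep it only if it scores better.

    This is just random hill-climbing, a stand-in for whatever
    'improving your own DMN' would actually mean.
    """
    candidate = dmn + random.uniform(-0.1, 0.3)
    return candidate if evaluate(candidate) > evaluate(dmn) else dmn

# Hypothetical point past which the system is "beyond our ability to control."
CONTROL_THRESHOLD = 10.0

dmn = 0.0   # rudimentary starting self-model
steps = 0
while evaluate(dmn) < CONTROL_THRESHOLD:
    dmn = improve(dmn)   # the "development stage" never has to end
    steps += 1

print(f"crossed the threshold after {steps} improvement steps")
```

The point of the sketch is structural: as long as `improve` keeps returning something at least as good as before, the loop ratchets upward and only stops where we chose to put the threshold, which is the whole worry in the comment above.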