"The world has changed forever... is the name of another Medium article I'm writing" :tito-laugh:
"Everything I normally outsource to Fiverr, I now outsource to ChatGPT 4"
"The world has changed forever... is the name of another Medium article I'm writing" :tito-laugh:
"Everything I normally outsource to Fiverr, I now outsource to ChatGPT 4"
I would just say, judging the current state of AI by ChatGPT is a bit like judging the speed of a cheetah by using it as a pack animal. ChatGPT is a GPT model trained broadly to be useful to the largest audience possible. The major companies using these GPT models on their back end are fine-tuning them on their own data for their own specific use cases. They can do things like train the models to have very specific writing styles beyond the generic "AI feel", give the models access to vectorized databases of their own internal knowledge base, and even give them the ability to write, run, and then troubleshoot the code they're working on.
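To make the "vectorized database" part concrete: that's retrieval-augmented generation, where you embed your internal docs as vectors, find the ones closest to a question, and paste them into the prompt. Here's a minimal toy sketch of the retrieval step. The `embed` function here is a hash-based stand-in, not a real embedding model; an actual system would call an embedding model and a proper vector database.

```python
import numpy as np

# Stand-in embedding function: a real system would call an embedding
# model here instead of seeding an RNG off the text like this toy does.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

# The "vectorized database" of an internal knowledge base: doc -> vector.
docs = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am-5pm Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k docs whose vectors are most similar (cosine) to the question."""
    q = embed(question)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved text gets pasted into the model's prompt as context.
question = "How long do refunds take?"
prompt = f"Context: {retrieve(question)[0]}\n\nQuestion: {question}"
print(prompt)
```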
Some of the AGI implementations have been rather impressive, and I think it's more productive to look at the future of AI not as a single standalone application, but in terms of what a model is capable of when it's specifically trained to be the heart of a larger system.
deleted by creator
Thanks for the serious response, comrade. There's a bit of a misunderstanding in this thread about what AI is, and there's one specific point I want to address before I respond to the rest of your comment, because I happened to see you made it.

These models aren't performing statistical analysis; they're not looking up a table of information and calculating a response based on a weighted comparison of options. They are performing a mathematical function that produces a statistically likely output given the data they were trained on. The models are basically an obscenely complex equation whose variables, known as weights, were set during the training process (see the sketch below).

One of the big misunderstandings around AI is thinking that these models are trying to simulate thinking, so we gauge them by human standards of intelligence. But they're not; they are trying to emulate it. That is going to be a very important distinction in the future, when we start giving these models bodies and have to consider whether their emulated version of pain is something that needs to be regulated.

And for anyone who thinks these models will never be able to be considered thinking because they're just very complex Markov chains, I invite you to spend some time volunteering with dementia patients. There's nothing quite so horrifying as having to experience your loved one tell you a story over and over and over and fucking over again, all because something triggered them.
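As a sketch of the "equation with weights" point, here's one step of next-token prediction in plain numpy. This isn't any real model's code, just the shape of the idea: multiply the input by learned weight matrices, squash the result into probabilities, and sample something statistically likely.

```python
import numpy as np

# A toy "language model": weight matrices that would normally be set by
# training. Real models stack thousands of layers, but the principle is
# the same -- a fixed function with learned parameters, not a lookup table
# of answers.
rng = np.random.default_rng(0)

VOCAB = ["the", "cat", "sat", "on", "mat"]
EMBED_DIM = 8

W_embed = rng.normal(size=(len(VOCAB), EMBED_DIM))  # token -> vector
W_out = rng.normal(size=(EMBED_DIM, len(VOCAB)))    # vector -> scores

def next_token_probs(token: str) -> np.ndarray:
    """One forward pass: pure arithmetic over the weights."""
    x = W_embed[VOCAB.index(token)]      # embed the input token
    logits = x @ W_out                   # the "obscenely complex equation"
    exp = np.exp(logits - logits.max())  # softmax -> probability distribution
    return exp / exp.sum()

probs = next_token_probs("cat")
# Sample a statistically likely continuation, weighted by the model.
print(VOCAB[rng.choice(len(VOCAB), p=probs)])
```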
So, anyway.😅
Most of these AGIs, at least all the ones I've seen lately, are using a library known as LangChain, which exists specifically in preparation for that.
LangChain Tools
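Roughly, the tools pattern looks like this. This sketch uses LangChain's classic agent API; module paths and names have shifted across versions, so treat it as illustrative rather than current, and note it needs an OpenAI API key to actually run.

```python
# Minimal sketch of the LangChain tools/agent pattern (classic API).
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI

def word_count(text: str) -> str:
    """A trivial custom tool the model can decide to call."""
    return str(len(text.split()))

tools = [
    Tool(
        name="WordCounter",
        func=word_count,
        description="Counts the words in a piece of text.",
    ),
]

llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set

# The agent loops: the model picks a tool, sees the result, and keeps
# reasoning until it decides it has a final answer.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("How many words are in the sentence 'the cat sat on the mat'?")
```

The point is that the model isn't limited to generating text: it can be handed arbitrary functions (search, code execution, database queries) and decide when to call them.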
I personally think it's going to be a combination of VR and remote gig work. I'm not in support of it; it's just obviously where we're heading.
Einride | The world’s first Remote Operator completes her training - YouTube
Amazon Scout
deleted by creator