Here's a decent video covering the project, basically just giving commentary while playing the demos. AUTO-GPT: Autonomous GPT-4! Mini AGI is HERE! - YouTube

GitHub - Torantulino/Auto-GPT: An experimental open-source attempt to make GPT-4 fully autonomous.

Future is gonna be wild. Anyone keeping up with the LLaMA-based models coming out? Basically, Facebook released an LLM called LLaMA, and in the past few weeks a number of groups realized they could skip the long, arduous process of compiling training data and instead use the OpenAI API to just ask ChatGPT questions, save the answers, and then fine-tune the LLaMA model on the ChatGPT data, all for less than a grand. And once trained it can run locally on your home computer. Not as high level as GPT-4, but it's still pretty impressive... but also it's just propagating the same lib standards as ChatGPT. BUT BUT, projects like GPT4All did release their training data. So it would be possible for someone to edit it to be a bit more radical. :cyber-lenin:
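For anyone curious, the "ask ChatGPT, save the answers" step is roughly this (just a sketch using the old pre-1.0 `openai` Python client; the seed questions and file name are made up, and the real projects like Alpaca/GPT4All used far bigger prompt sets):

```python
# Illustrative sketch of collecting ChatGPT answers as fine-tuning data.
# Assumes the legacy openai package (<1.0) and an API key in OPENAI_API_KEY.
# Seed questions and output filename are hypothetical.
import json
import openai

seed_questions = [
    "Explain dialectical materialism in one paragraph.",
    "Write a short poem about mass transit.",
]

pairs = []
for q in seed_questions:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": q}],
    )
    answer = resp["choices"][0]["message"]["content"]
    pairs.append({"prompt": q, "response": answer})

# Dump the prompt/response pairs as a small instruction-tuning dataset.
with open("chatgpt_pairs.json", "w") as f:
    json.dump(pairs, f, indent=2)
```

From there it's a standard instruction fine-tune of LLaMA on the saved pairs, which is where the "under a grand" figure comes from.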

  • doublepepperoni [none/use name] · 2 years ago
    And once trained it can run locally on your home computer.

    As long as it's got a high-end Nvidia graphics card and tons of RAM. :sicko-wistful:

    Hopefully they can make these run on modest PCs eventually. I had fun generating random things with the free ChatGPT site that was linked here a while back.

      • invalidusernamelol [he/him] · 2 years ago

        The 7B model can even run on a mid-range phone. It just needs about 4 GB of RAM.

        Rendering a raytraced scene in Blender can easily top out at 10x that, so it's not that intense. Even Chrome and Firefox frequently hit 4 GB when you have several tabs open.
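        Back-of-the-envelope for why ~4 GB is enough (assuming a 4-bit quantized build, like the llama.cpp ones people run on phones):

        ```python
        # Rough memory math for a quantized LLaMA 7B (illustrative only).
        params = 7_000_000_000       # 7B parameters
        bytes_per_param_q4 = 0.5     # 4-bit quantized weights (assumption)
        weights_gb = params * bytes_per_param_q4 / 1e9
        print(f"~{weights_gb:.1f} GB of weights")  # ~3.5 GB, plus a bit of overhead for the KV cache
        ```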