I saw "Generative AI for Beginners" from Microsoft on GitHub. I've looked at https://fmhy.pages.dev/ai but I'm not sure what I'm really looking for.

I write fiction, and I want a chatbot that will function like ChatGPT 3.5, but not shut down if things get bloody or sexy, as they so often do.

You know ready, aim, fire? I'm in the AIM stage.

  • db0@lemmy.dbzer0.com
    ·
    9 months ago

    Check out the AI Horde: https://aihorde.net, or the direct LLM frontend at https://lite.koboldai.net. Free, FOSS, crowdsourced, with uncensored models that won't ever be rug-pulled.

  • turkishdelight@lemmy.ml
    ·
    9 months ago

    Ollama helps you easily run LLMs locally: https://ollama.com/

    I'm running llama2-uncensored on my laptop with 8GB of memory.
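
    If you'd rather call it from a script than the CLI, here's a minimal sketch using Ollama's Python client (untested, the prompt is just an example; it assumes the Ollama server is running locally and the model has already been pulled with `ollama pull llama2-uncensored`):

    ```python
    # pip install ollama  -- the Ollama server must already be running locally
    import ollama

    # Ask the local llama2-uncensored model for a brainstorming pass on a scene.
    response = ollama.chat(
        model="llama2-uncensored",
        messages=[
            {"role": "system", "content": "You are a brainstorming partner for dark fiction."},
            {"role": "user", "content": "Give me three ways this ambush scene could go wrong for the protagonist."},
        ],
    )

    # The reply text lives under message -> content.
    print(response["message"]["content"])
    ```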

  • Vampire [any]
    ·
    9 months ago

    There's a Reddit forum called LocalLLaMA.

  • wathek@discuss.online
    ·
    edit-2
    9 months ago

    I would look into NovelAI for writing; it's built quite specifically for that. It's a paid service similar to ChatGPT, but it's uncensored and private.

    You can run your own lightweight LLM on a laptop, but the output will be useless. Good output requires big-boy compute.

    If you do want to run it on your own hardware, look into Ollama. There are also options to run your own LLM in the cloud, with a process that isn't too difficult for non-techies.

    Frankly, I'd find the right LLM for your needs and just pay for it per month, maybe NovelAI, maybe something else, but ChatGPT is not great for creative fiction.

    • Melatonin@lemmy.dbzer0.com
      hexagon
      ·
      9 months ago

      I got a little TOO MUCH involvement from NovelAI. I guess I want suggestion help, idea-spitballing help, but what I'm looking for is specialized.

      I want my ai to stay on the shelf with my thesaurus until I'm ready to use it.

      • wathek@discuss.online
        ·
        8 months ago

        Interesting, I'm vaguely interested in this too. I have half of a world written that I want to maybe turn into a game (probably not, but I'm having fun). I have the hardware to turn what I have into embeddings for an open model, and the hardware to run it. So that's the way I would go about it, though I can't vouch for how helpful it would be (yet). A rough sketch of what I mean is below.
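
        Something like this (untested, the filename, model name, and query are just placeholders), using sentence-transformers to embed the worldbuilding notes and pull the most relevant chunks back out to paste into whatever open model you end up running:

        ```python
        # pip install sentence-transformers
        from sentence_transformers import SentenceTransformer, util

        # Hypothetical setup: worldbuilding notes, one paragraph per chunk.
        chunks = open("world_notes.txt", encoding="utf-8").read().split("\n\n")

        model = SentenceTransformer("all-MiniLM-L6-v2")
        chunk_embeddings = model.encode(chunks, convert_to_tensor=True)

        # When writing, embed the current question and retrieve the closest lore chunks.
        query = "Who controls the river trade in the eastern provinces?"
        query_embedding = model.encode(query, convert_to_tensor=True)

        scores = util.cos_sim(query_embedding, chunk_embeddings)[0]
        top = scores.topk(k=min(3, len(chunks)))
        for score, idx in zip(top.values, top.indices):
            print(f"{score:.2f}  {chunks[int(idx)][:80]}")
        ```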