One use of LLMs that I haven't seen mentioned before is as a sounding board for your own ideas. Talking a concept through with an LLM can surface fresh perspectives in its generated responses.

In this context, the LLM's actual comprehension is irrelevant. The purpose lies in its ability to spark new thought processes by prompting you with unexpected framings or questions.

Definitely recommend trying this trick next time you're writing something.
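
A minimal sketch of what that loop can look like, using the gpt4all Python bindings that come up later in the thread (the model filename is just a placeholder, and any local chat API would work the same way):

```python
from gpt4all import GPT4All

# Placeholder model file; swap in whatever local GPT4All model you have downloaded.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    print("Describe your idea (empty line to quit):")
    while True:
        idea = input("> ").strip()
        if not idea:
            break
        # Ask for probing questions rather than agreement: the goal is new framings,
        # not correct answers, so the model's actual comprehension matters less.
        prompt = (
            "Act as a sounding board. Don't just agree with me; ask three probing "
            "questions that challenge this idea:\n" + idea
        )
        print(model.generate(prompt, max_tokens=400))
```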

  • lurkerlady [she/her] · 4 months ago

    automatic1111 webui launcher, it's Stable Diffusion. Fun fact: its icon is a pic of Ho Chi Minh

    If you wait, Stable Diffusion 3 is coming out soon. NVIDIA will run it faster because its tensor cores are better, unfortunately. SD is more ethical than others; you can load up models that are trained only on public art and pics.

    • FuckBigTech347@lemmygrad.ml · 4 months ago

      I'm pretty sure I tried that one, but it kept running out of VRAM. Also it relies on the proprietary AMD/NVIDIA software stacks, which are a pain to set up. GPT4All is a lot better in that regard; they just use Vulkan compute shaders to run the models.
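
      To show what "just Vulkan compute shaders" looks like from the Python side, here's a minimal sketch with the gpt4all bindings; the model filename is a placeholder, and device="gpu" selecting the Vulkan (Kompute) backend is an assumption about the bindings rather than anything stated above:

      ```python
      from gpt4all import GPT4All

      # Placeholder model file; device="gpu" is assumed to pick the Vulkan (Kompute)
      # backend, so no CUDA or ROCm install is involved.
      model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", device="gpu")
      print(model.generate("One-sentence sanity check, please.", max_tokens=32))
      ```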

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 4 months ago

        There's also ComfyUI, but the learning curve is a bit steeper https://github.com/comfyanonymous/ComfyUI

        although there's a CushyStudio frontend for it that's more user-friendly: https://github.com/rvion/CushyStudio

        • FuckBigTech347@lemmygrad.ml · 4 months ago

          ComfyUI seems the most promising, but it also uses ROCm/CUDA, which don't officially support any of my current GPUs (models load successfully, but midway through computing it fails). Why can't everyone just use compute shaders lol.
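
          One way to catch that "loads fine, dies mid-compute" failure before committing to a long run is a tiny GPU smoke test; this is plain PyTorch (which ComfyUI runs on), not anything ComfyUI ships, and the small matmul is just an assumed stand-in for a real workload:

          ```python
          import torch

          def gpu_actually_works() -> bool:
              """Return True only if a small computation really runs on the GPU."""
              if not torch.cuda.is_available():  # ROCm builds of PyTorch also show up as "cuda"
                  return False
              try:
                  x = torch.randn(1024, 1024, device="cuda")
                  (x @ x).sum().item()  # .item() forces the kernel to actually execute
                  return True
              except RuntimeError as err:
                  print(f"GPU is visible but compute failed: {err}")
                  return False

          device = "cuda" if gpu_actually_works() else "cpu"
          print(f"Using device: {device}")
          ```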