  • GaveUp [love/loves] · edit-2 · 9 hours ago

    but I would hope it wouldn't make stuff up especially since the training set was only internal

    I use an internal LLM at one of the biggest tech companies and it makes shit up all the time lol

    • FloridaBoi [he/him] · 6 hours ago

      Jfc. Like, who do you blame here? The model for being stupid, or the prompter for not validating? And if they are validating, are there any time savings left?