Is this because LLMs don't do anything good or useful? They get very simple questions wrong, will fabricate nonsense out of thin air, and even at their most useful they're just a conversational version of a Google search. I haven't seen a single thing they do that a person would need or want.
Maybe it could be neat in some kind of procedurally generated video game? But even that would be worse than something written by human writers. What is an LLM even for?
I think there are legitimate uses for this tech, but they're pretty niche and difficult to monetize in practice. For most jobs, correctness matters, and if the system can't be guaranteed to produce reasonably correct results then it's not really improving productivity in a meaningful way.
I find this stuff is great in cases where you already have domain knowledge: you can bounce ideas off it, and the output it generates can spark new ideas of your own. Whether it understands what it's outputting really doesn't matter in this scenario. It also works reasonably well as a coding assistant, where it can generate code that points you in the right direction, and doing that can be faster than googling.
We'll probably see some niches where LLMs can be pretty helpful, but their capabilities are incredibly oversold at the moment.
AI is great for asking questions, not answering them
We might eventually get to a point where LLMs are a useful conversational user interface for systems that are actually intrinsically useful, like expert systems, but it will still be hard to justify their energy cost for such a trivial benefit.
The costs of operation aren't intrinsic though. There is a lot of progress in bringing computational costs down already, and I imagine we'll see a lot more of that happening going forward. Here's one example of a new technique resulting in cost reductions of over 85% https://lmsys.org/blog/2024-07-01-routellm/
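For anyone curious what that kind of routing looks like in principle: the core idea is to send easy queries to a cheap model and only escalate hard ones to an expensive one. Here's a minimal sketch of that idea; the model names, costs, and the length-based difficulty heuristic are all illustrative assumptions, not the actual RouteLLM implementation (which uses a trained router).

```python
# Toy sketch of LLM routing: cheap model for easy queries, expensive
# model for hard ones. All names and costs here are made up for
# illustration; a real router (like RouteLLM's) is a learned classifier.

from dataclasses import dataclass


@dataclass
class Model:
    name: str
    cost_per_query: float  # illustrative cost units


CHEAP = Model("small-local-model", 0.1)
EXPENSIVE = Model("large-frontier-model", 1.0)


def estimate_difficulty(query: str) -> float:
    """Toy stand-in for a learned router: longer, multi-part questions
    score as harder."""
    score = min(len(query) / 200, 1.0)
    if query.count("?") > 1:
        score = min(score + 0.3, 1.0)
    return score


def route(query: str, threshold: float = 0.5) -> Model:
    """Pick the cheapest model expected to handle the query."""
    return EXPENSIVE if estimate_difficulty(query) > threshold else CHEAP


# Most traffic is short/easy, so average cost drops well below the
# cost of sending everything to the expensive model.
queries = ["What is 2+2?", "Compare the trade-offs of " * 20]
total = sum(route(q).cost_per_query for q in queries)
```

The savings come from the fact that most real-world query traffic is simple, so the expensive model only sees a small fraction of it.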
I've been thinking AI generated dialogue in Animal Crossing would be an improvement over the 2020 game.
To clarify, I'm not wanting the writers at the Animal Crossing factory to be replaced with ChatGPT. Having conversations that are generated in real time in addition to the animals' normal dialogue just sounds like fun. Also I want them to be catty again because I like drama.
Nah, something about AI dialogue is just soulless and dull. Instantly uninteresting. Same reason I don't read the AI slop being published in ebooks. It has no authorial intent and no personality. It isn't even trying to entertain me. It's worse than reading marketing emails because at least those have a purpose.
It depends on the training data. Once you use all data available, you get the most average output possible. If you limit your training data you can partially avoid the soullessness, but it's more unhinged and buggy.
Make the villagers petty assholes like the original game and RETVRN the crabby personality type and it would be an improvement.
Is there a single LLM you can’t game into apologizing for saying something factual then correcting itself with a completely made up result lol
The LLM characters will send you on a quest, and then you'll go do it, and then you'll come back and they won't know you did it and won't be able to give you a reward, because the game doesn't know the LLM made up a quest, and doesn't have a way to detect that you completed the thing that was made up.
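The usual way designers avoid that failure mode (a hedged sketch, not how any shipped game actually works) is to keep quest state in the game engine and let the LLM generate only the surrounding dialogue, so the engine always knows what was promised and can verify completion. The `generate_dialogue` function below is a hypothetical stand-in for an LLM call:

```python
# Sketch: the engine owns quest data (id, objective, reward); the LLM
# only phrases dialogue around quests the engine created. That way the
# game can always detect completion and hand out the reward.

from dataclasses import dataclass, field


@dataclass
class Quest:
    quest_id: str
    objective: str  # machine-checkable condition, e.g. an item id
    reward: str
    completed: bool = False


@dataclass
class GameState:
    inventory: set = field(default_factory=set)
    quests: dict = field(default_factory=dict)


def generate_dialogue(prompt: str) -> str:
    """Hypothetical LLM call; a canned string so the sketch runs."""
    return f"[generated chatter about: {prompt}]"


def give_quest(state: GameState, quest: Quest) -> str:
    state.quests[quest.quest_id] = quest  # engine records the quest
    return generate_dialogue(f"ask player to fetch {quest.objective}")


def talk_to_npc(state: GameState, quest_id: str) -> str:
    quest = state.quests[quest_id]
    if not quest.completed and quest.objective in state.inventory:
        quest.completed = True  # the engine, not the LLM, checks this
        state.inventory.add(quest.reward)
        return generate_dialogue(f"thank player, hand over {quest.reward}")
    return generate_dialogue("small talk")


state = GameState()
give_quest(state, Quest("q1", "sea bass", "bells"))
state.inventory.add("sea bass")  # player completes the objective
talk_to_npc(state, "q1")
```

The trade-off is that the LLM can never invent a quest on its own, which is exactly the constraint the comment above is pointing at: free-form generation and verifiable game state pull in opposite directions.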
Cory Doctorow has a good write-up on the reverse centaur problem and why there's no foreseeable way that LLMs could be profitable. Because of the way they're error-prone, LLMs are really only suited to low-stakes uses, and there are lots of low-stakes, low-value uses people have found for them. But they need high-value use-cases to be profitable, and all of the high-value use-cases anyone has identified for them are also high-stakes.
Thank you. This is a good article. Are there any good book length things I could read on this topic?
I do not know. Perhaps Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell.