At some level, the computer is going to have to handle everything. Otherwise, you're just playing tabletop.
But in the same way you can marginally improve graphics, you can programmatically improve story.
That does require craftsmanship. It can't just be lazily throwing prompts at ChatGPT. But you can outsource things like fluid dialogue and ambient chatter to an AI to flesh out the background of a game in a real way.
Enough so that it's not just a soldier doing a hallway patrol in a loop at all hours. Or a monster popping into existence and then getting baited into an easy ambush over and over and over again.
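A toy sketch of what that ambient-chatter hookup could look like, assuming an OpenAI-style chat-completions client; the model name, function, and game-state fields are placeholders for illustration, not anyone's shipping design:

```python
# Ask a language model for one short, context-aware bark instead of looping
# the same canned line. Everything game-specific here is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guard_chatter(location: str, time_of_day: str, recent_event: str) -> str:
    """Return one line of background chatter for a patrolling guard."""
    prompt = (
        f"You are a bored castle guard patrolling the {location} at {time_of_day}. "
        f"The last notable thing that happened: {recent_event}. "
        "Mutter one short line of ambient chatter, under fifteen words. "
        "Stay in character; no narration, no quotation marks."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=40,
        temperature=0.9,  # higher temperature -> more varied chatter
    )
    return response.choices[0].message.content.strip()

# e.g. guard_chatter("east hallway", "3 a.m.", "a crate went missing from the armory")
```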
dialogue is the only thing we're talking about here, procedural generation and game AI sophistication are increasing through the labor of human devs and hardware capabilities.
and i'm seeing extremely limited utility for AI text generation if you're gonna be approving all the lines & handing them off to voice actors anyway. we don't have a million chatter lines because they ain't paying a writer to spend time on it, which they still would in that case
approving all the lines & handing them off to voice actors anyway.
Nah, you'd use a voice synthesizer to do it on the fly. That'd be the whole point of it, being able to come up with novel responses based on the question asked.
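A rough sketch of the on-the-fly half, feeding a generated line straight into a speech synthesizer; pyttsx3 is just a stand-in offline engine here, picked because its API is simple, not because it would pass for a voice actor:

```python
# Speak whatever line the model produced, with no pre-recorded take.
import pyttsx3

def speak_line(line: str, rate: int = 165) -> None:
    """Synthesize one line of dialogue immediately."""
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)  # roughly words per minute
    engine.say(line)
    engine.runAndWait()  # blocks until playback finishes

# speak_line(guard_chatter("east hallway", "3 a.m.", "a crate went missing from the armory"))
```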
You need to remember that these LLM things aren't smart, they don't think, they aren't capable of reflection or semantic understanding. All they're doing is generating strings of text based on weighted probabilities. That's fine as long as you keep things very superficial, but if you try to get any depth out of it the gaps are going to become glaringly obvious very quickly.
You know how the "art" models can't draw hands? If you try to use these things for more than they're capable of you're going to get that kind of unnerving uncanny valley fuck up in your narrative. They can't draw hands because they're not capable of abstraction. They don't have an abstract understanding that a 'hand' is an object with certain characteristics no matter what position it's in. It can't do hands because they appear in so many different shapes in so many different contexts that it can't identify a consistent enough pattern to reproduce the hand. Faces are relatively consistent and easy, but anything that moves around too much - hair, hands, really almost any detailed object that's not a face - it can only draw very superficial approximations of. Jewelry is a good tell because there are so often pieces that don't connect to anything, or pieces that flow into flesh or clothing.
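To put the "weighted probabilities" point in concrete terms, here is a toy next-token sampler; the vocabulary and scores are invented for illustration and say nothing about how any particular model is actually built:

```python
# At each step a language model turns scores over its vocabulary into a
# probability distribution and samples one token from it. That's the whole loop.
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Pick one token index by softmax-weighted random choice."""
    scaled = logits / temperature
    scaled = scaled - scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))

vocab = ["the", "guard", "dragon", "hallway", "sleeps"]
logits = np.array([2.1, 1.3, 0.2, 0.7, -0.5])  # made-up scores
print(vocab[sample_next_token(logits)])  # usually "the", sometimes not
```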
It can’t do hands because they appear in so many different shapes in so many different contexts that it can’t identify a consistent enough pattern to reproduce the hand.
There are a lot of simple end runs around this glitch, even before the upgrades and refinements. Just requesting hands appear under an apron or behind the back, for instance.
Professional artists do shit like this all the time. There's a whole rant online about how Rob Liefeld can't draw feet. He's been churning out comics the old fashioned way (by stenciling over images he's not great at drawing) for decades.
All that is to say, we're not asking AI to work miracles here. Just to improve on the rudimentary and generic digital conversations we have now, in a way programs like Replika have already proven successful at.
Yes, if you stare too hard at the edges, you'll lose the illusion. But we're not trying to outwit Harrison Ford in Blade Runner, here. Humans are very easy to fool when they want to be fooled.
'radiant' generation already makes up a large part of skyrim & later games' content. AI is like a hat on a hat.
you want depth, you've gotta do it by hand. you want shallow, keep it simple and use the tools we already have.
I am once again asking you to stop giving Valve ideas
i am so sorry
:communism-will-win:
🧢
:ushanka:
🎩
:whywhywhywhywhy:
Ha-ha! You are as PRESUMPTUOUS as you are POOR and IRISH
You know how the "art" models can't draw hands?
The latest version of Midjourney is now able to draw hands with far higher consistency and quality. It only took them a year to polish that bit.
Case in point