I'd say less than five years for it to be possible to generate that at all, and over five years before consumer hardware can do it in under a minute, with the caveat that it'd be rubbish. You could probably synthesize parts of that tech right now with clever enough fake solutions. For instance, the VR video doesn't need to be prerendered; it just needs to generate props and generic animations to render on the fly with traditional rasterization or something. You could probably build a set of generic parts to procedurally generate from, train a machine learning model to assemble them to match a description, and use some GPT-3 sort of thing to write a script to pull descriptions from. Although, again, it would be absolute garbage.
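To make the "generic parts assembled to match a description" idea concrete, here's a minimal toy sketch. Everything in it is hypothetical: the part library, the tags, and score_part() are stand-ins, where a real system would use a trained model to rank parts instead of keyword overlap.

```python
# Hypothetical sketch: a library of premade parts plus a stand-in
# "model" that matches description keywords to part tags. A real
# system would replace score_part() with a learned relevance model.
PART_LIBRARY = [
    {"name": "oak_tree", "tags": {"forest", "tree", "nature"}},
    {"name": "brick_wall", "tags": {"city", "building", "wall"}},
    {"name": "campfire", "tags": {"forest", "camp", "fire"}},
    {"name": "street_lamp", "tags": {"city", "street", "light"}},
]

def score_part(part, description_words):
    # Stand-in for an ML model: score = keyword overlap with the tags.
    return len(part["tags"] & description_words)

def assemble_scene(description, n_parts=2):
    # Rank every part against the description and keep the best few.
    words = set(description.lower().split())
    ranked = sorted(PART_LIBRARY,
                    key=lambda p: score_part(p, words),
                    reverse=True)
    return [p["name"] for p in ranked[:n_parts]]

print(assemble_scene("a quiet forest camp at night"))
# -> ['campfire', 'oak_tree']
```

The interesting part is that the renderer never needs to generate novel geometry: it only picks and places premade assets, which is exactly the "fake solution" shortcut described above.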
I feel like, to make that sort of generation of whole works from simple prompts actually work, it would need additional layers of filtering and curation to pick out and remove the common failure modes of AI generation.
In fact, I can envision a toolkit for exactly that: it organizes every layer of outputs so you can dissect the generated work, and it lets the user edit, regenerate, or swap out individual components, pick from multiple variations, and so on. Then you just need thorough logging and data harvesting in that editor, and you can use the collected data to train a filtering AI, enabling fully automated generation by letting an AI operate AI tools designed for humans.
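A rough sketch of what that editor's data model might look like, assuming nothing beyond the description above: each pipeline layer keeps multiple candidates, the user picks or edits one, and every action is logged as future training data for the filtering AI. All class and layer names here are invented for illustration.

```python
# Hypothetical data model for the layered editing toolkit described above.
class LayeredWork:
    def __init__(self):
        self.layers = {}   # layer name -> list of candidate outputs
        self.chosen = {}   # layer name -> index of the accepted candidate
        self.log = []      # every user action; future filter-training data

    def add_layer(self, name, candidates):
        # A generation pass produces several variations for one layer.
        self.layers[name] = list(candidates)
        self.chosen[name] = 0
        self.log.append({"action": "generate", "layer": name,
                         "n": len(candidates)})

    def pick(self, name, index):
        # User selects one variation; the choice itself is a training signal.
        self.chosen[name] = index
        self.log.append({"action": "pick", "layer": name, "index": index})

    def edit(self, name, new_text):
        # Manual fixes get logged too: they mark what the generator got wrong.
        i = self.chosen[name]
        self.layers[name][i] = new_text
        self.log.append({"action": "edit", "layer": name, "index": i})

    def render(self):
        # Assemble the final work from the accepted candidate of each layer.
        return {name: self.layers[name][self.chosen[name]]
                for name in self.layers}

work = LayeredWork()
work.add_layer("script", ["draft A", "draft B"])
work.add_layer("props", ["tree, rock", "lamp, bench"])
work.pick("script", 1)
work.edit("props", "tree, rock, campfire")
print(work.render())
# -> {'script': 'draft B', 'props': 'tree, rock, campfire'}
```

The point of the log is that it records human judgment at the exact granularity the generator works at, which is what you'd need to train a model to make those picks and edits itself.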
Can you imagine it? Endless, apocalyptic tides of low-grade slop the likes of which not even rubbish slop factories like RPGMaker and Poser have managed in the past, created by AIs, curated by AIs, and consumed by literally no one because it's the worst thing ever.