Google says it's temporarily suspended the ability of Gemini, its flagship generative AI suite of models, to generate images of people while it works on updating the model to improve the historical accuracy of outputs.
Pretty sure these tools are often seeded with prompts that enforce diversity. Bing does the same or similar. I'm more amused by this than anything, since the process itself isn't aware of these settings and can't actively enable or disable them.
To properly fit a historical prompt, it would need not only to consider images from the period, but also to synthesize the historical context that goes with the prompt.
That would require some kind of machine capable of learning, a model of language so incredibly large that it can comprehend these linguistic nuances, or an intelligent form of artificial device.
Wonder if we'll ever have something like that in the future.
I mean, we ourselves are just electronic meat machines (with millions of years' worth of fine-tuning).
I'm sure that'll happen at some point in the future, if we manage to not destroy ourselves and/or the planet by then.
There's a sci-fi horror book I enjoyed, called "John Dies at the End", that posits an alternative history in which computers were created from the brains of pigs.
As a consequence, that civilization is heavily invested in harvesting organs in the same way that we're invested in drilling for oil.
Yes, I saw some discussion and a screenshot somewhere showing that, at least in its current state, Gemini can (or could) be asked to output the prompt enhancements it used along with the generated images.
The screenshot showed someone asking for images of fruit, and the enhanced prompt included "racially diverse groups of people". Now if they're inserting something like that even for images containing no people at all, it stands to reason that this is just a default enhancement they ALWAYS apply, no matter the prompt, which would explain the racially diverse Nazis (and all the other brouhahas we've seen from them).
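Purely as a toy illustration of what an always-applied default enhancement would look like (hypothetical wording and code, not Gemini's actual pipeline), that behaviour would explain the clause showing up even on a fruit prompt:

```python
# Hypothetical sketch: a default enhancement appended to every prompt,
# regardless of whether people are involved. Not Gemini's real pipeline.
DEFAULT_ENHANCEMENT = "featuring racially diverse groups of people"

def enhance(prompt: str) -> str:
    # Applied unconditionally, which is what the fruit screenshot suggests.
    return f"{prompt}, {DEFAULT_ENHANCEMENT}"

print(enhance("a bowl of fruit"))
# -> "a bowl of fruit, featuring racially diverse groups of people"
print(enhance("German soldiers in 1943"))
# -> the same clause gets appended, historical context or not
```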
That's really what I'm expecting. My guess is that the training data is skewed, and a blanket prompt enhancement can't adjust for context.
Either the machine will need to understand what is expected, or the company will need to address this and allow people to enable or disable diversity.
The first option may be impossible to attain at this stage. The second can lead to inappropriate images.
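If it helps, here's a rough sketch of what that second option (a user-controllable switch) might look like; purely hypothetical, no real Gemini API or setting implied:

```python
# Hypothetical sketch of a user-facing toggle for the enhancement.
# Nothing here corresponds to a real Gemini setting.
DEFAULT_ENHANCEMENT = "featuring racially diverse groups of people"

def build_prompt(user_prompt: str, diversify: bool = True) -> str:
    # Enhancement stays on by default but can be switched off per request.
    return f"{user_prompt}, {DEFAULT_ENHANCEMENT}" if diversify else user_prompt

print(build_prompt("a crowd at a concert"))                   # enhanced by default
print(build_prompt("a crowd at a concert", diversify=False))  # passed through as-is
```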