Okay, the image is messy, but the snake coiling around the scales is actually a sick concept.
The last one is a cool concept, but pie charts are pretty useless lmao
My main concern with people making fun of such cases is that the deficiencies of "AI" become harder to find/detect while still obviously being present.
Whenever someone publishes a proof of a system's limitations, the company behind it gets a test case it can use to improve it. The next time we - the reasonable people arguing that cybernetic hallucinations aren't AI yet and are dangerous - try using such a point, we'll only get a reply of "oh yeah, but they've fixed it". Even people in IT often don't understand what they're dealing with, so non-IT people may have even more difficulty...
Myself - I just boycott this rubbish. I've never tried any LLM and don't plan to, unless it's used to work with language, not knowledge.
My guess is the blue and yellow hexagons.