This almost seems like a good idea... if Unicode weren't already shaky enough.
UTF-8 is, honestly, pretty amazing. It lets you do things like compose Latin-character text and then interpose characters like 𰻞.
That's 'biáng', which is, to my understanding, a kind of Chinese noodle dish. It's apparently the most complex Chinese character, comprising more than 50 strokes. (https://www.compart.com/en/unicode/U+30EDE).
In hex it's encoded as: 0xF0 0xB0 0xBB 0x9E
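You can check that encoding for yourself; here's a quick Python sketch (Python just happens to be handy for poking at Unicode):

```python
# Encode U+30EDE ('biáng') and inspect its UTF-8 bytes.
ch = "\U00030EDE"  # the biáng character
encoded = ch.encode("utf-8")

print(encoded.hex(" "))  # f0 b0 bb 9e
print(len(encoded))      # 4
```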
So, yeah, only 4 bytes to describe a character that looks like white noise to me unless I zoom WAY in on it! (My vision's getting pretty bad, tbh. I need it to be about the size it shows up on compart.com to make out the individual radicals.)
If you count pen strokes in the Latin spelling 'biáng', you get 11 strokes encoded in 6 bytes (the 'á' takes two), or roughly 1.8 strokes per byte. The character, at 4 bytes for 57 pen strokes, works out to 14.25 strokes per byte.
So the storage requirements are in the same ballpark either way, but encoding the much more complex character gets you nearly eight times the information density.
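The byte counts behind that comparison are easy to verify (the stroke tallies are my own counts; the byte lengths come straight from Python's UTF-8 encoder):

```python
# Compare bytes-per-stroke for the Latin spelling vs. the single character.
latin = "biáng".encode("utf-8")       # 'á' (U+00E1) takes two bytes
hanzi = "\U00030EDE".encode("utf-8")  # the biáng character

print(len(latin))  # 6
print(len(hanzi))  # 4

# Assuming 11 pen strokes for 'biáng' and 57 for the character:
print(11 / len(latin))  # ~1.83 strokes per byte
print(57 / len(hanzi))  # 14.25 strokes per byte
```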
I REGRET buying an nvidia adapter when I had the opportunity to buy an AMD/Radeon adapter.
During the pandemic, I purchased a GeForce GTX 1650. It's an older, Turing-based card, so you'd think the driver support would be pretty mature, right? It has been NOTHING but problems.
On nouveau, it's stable, but 3D acceleration just doesn't work right. Under the nvidia open source driver, it corrupts the screen after boot and locks up entirely seconds later. Under the proprietary driver, it freezes on boot a good amount of the time.
Now, once I get it booted, it's solid as a rock. I've gotta crank the engine over five or six times every time I DO boot, though. If I had it to do over again, I'd definitely have stuck with AMD.