Japanese-style peanuts, also known as Japanese peanuts or cracker nuts (widely known in the Spanish-speaking world as cacahuates japoneses or maní japonés), are a snack food made from peanuts coated in a wheat-flour dough and then fried or deep-fried. They come in a variety of flavors. The Mexican version's recipe for the extra-crunchy shell includes ingredients such as wheat flour, soy sauce, water, sugar, monosodium glutamate, and citric acid. The snacks are often sold in sealed bags, but can also be found in bulk containers.
History
Japanese-style peanuts were created in Mexico during the 1940s by Japanese immigrant Yoshihei Nakatani, the father of Yoshio and Carlos Nakatani. He lost his job after the mother-of-pearl button factory he worked at, named El Nuevo Japón, was forced to close after its proprietor came under suspicion of being a spy for the Empire of Japan.
Nakatani had to find alternatives to provide for his family. He obtained a job at La Merced Market, where he initially sold Mexican candies called muéganos [es]. Later, he developed a new variety of fried snacks he called oranda, after the fish of the same name. He also created a new version of a snack that reminded him of his homeland, mamekashi (seeds covered with a layer of spiced flour), which he adapted to Mexican tastes. Nakatani sold them in packages decorated with a geisha design made by his daughter Elvia. While his children tended to the family business, Nakatani and his wife Emma sold the snacks on local streets. Sales of the snacks were so successful that Nakatani was able to obtain his own stall at the market. With the help of Nakatani's son Armando, the family established their business under the brand Nipón in the 1950s; the name was registered as a trademark in 1977.
Nakatani never registered the patent for the snack. As a result, various competitors made their own versions of Japanese-style peanuts.
A Japanese version called Takorina, which originated in Okinawa, features the image of a Mexican charro on its bag and has been claimed to be "Mexican-style peanuts", though the rumour has been disproven.
Megathreads and spaces to hang out:
- 📀 Come listen to music and watch movies with your fellow Hexbear nerds in Cy.tube
- 🔥 Read and talk about current topics in the News Megathread
- ⚔ Come talk in the New Weekly PoC thread
- ✨ Talk with fellow Trans comrades in the New Weekly Trans thread
- 👊 Share your gains and goals with your comrades in the New Weekly Improvement thread
- 🧡 Disabled comm megathread
Reminders:
- 💚 You nerds can join specific comms to see posts about all sorts of topics
- 💙 Hexbear’s algorithm prioritizes comments over upbears
- 💜 Sort by new, you nerd
- 🌈 If you ever want to make your own megathread, you can reserve a spot here nerd
- 🐶 Join the unofficial Hexbear-adjacent Mastodon instance toots.matapacos.dog
Links To Resources (Aid and Theory):
Aid:
Theory:
Me randomly writing a program last night that calculates the Shannon entropy of one of my posts, of a post by that nerd who runs the lemmy.ml typography comm, and of an "ideal noise" source. The "ideal noise" source is one I just imagined, where each symbol in the, uhhh, combined alphabet of our posts occurs with equal probability, all in order to """"prove"""" that, actually, my posts aren't noise unless their posts are also noise (without even getting into my "ideal noise" source), unlike what the removal reason says. I was gonna post it before realizing this is way too online, and there's no place I can post it where they would see it and that doesn't make me look, uhhhh, rabid
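For the curious, a minimal sketch of what such a program could look like. The example strings and the construction of the "ideal noise" baseline (uniform over the combined alphabet, so its entropy is log2 of the alphabet size) are my own stand-ins, not the actual posts:

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Per-symbol Shannon entropy of a string, in bits."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Stand-in posts; substitute the real ones.
my_post = "copyright violation is good and liberatory"
their_post = "please write your comments like an email to your boss"

# "Ideal noise": every symbol in the combined alphabet is equally
# likely, so its entropy is simply log2(len(alphabet)).
alphabet = set(my_post) | set(their_post)
ideal_noise_entropy = log2(len(alphabet))

print(shannon_entropy(my_post), shannon_entropy(their_post), ideal_noise_entropy)
```

Real text always comes out well below the uniform baseline, since letter frequencies are nowhere near equal.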
Even better, maybe I just should have taken some averages of multiple of the above types of text things and done that standard deviation thing idk, or maybe do N-gram analysis or whatever
Idk what definition of noise we're using here, or how Shannon entropy relates to it other than you don't want too much or too little. They're also probably tech-brained, so I might be able to abuse that computer-program-math-proof equivalence thing too
It was fun to write this program though
Edit: just realized that what I really need is a massive text corpus created by a bunch of different people mashing their keyboards :3
Btw I am accepting submissions for this keymashing thing under spoilers in the replies and I might use them not joking hehehehehe
Their definition of noise is stupid; any sufficiently long text made of real words, meaningless or not, will tend to have the same k-mer distribution.
But, for example, the Bible versus Das Kapital will probably have distinct k-mer distributions.
Yeah they're using some reddit orange site definition of noise where noise is when you say anything that isn't written like an email to your boss or something lol
Good point tbh, idk, it's like for any system that could detect "noise" (whatever that means) you could find an input that is meaningless but passes the check. Fr, I guess they just want one of those NLP systems that literally tone-polices you and rejects your reply if it "seems" vaguely uncivil or passionate (read: interesting) to a ball of unfeeling floating-point numbers; then I'd have to find some weird way to insult them that wasn't in the training dataset smh
Bit idea: type of guy who figures out formal semantics just to check all his social media replies and block them if someone was uncivil or disagreed with him before he sees it
I support all measures to bully that lemmy.ml typography mod, real fuckin dweeb there. That mod also considers all-emoji comments, a staple of the hexbear posting style, to be prima facie noise. I had one such comment, whose clear semantic content was "copyright violation is good and liberatory", removed as noise.