WhyEssEff [she/her]

I do the emotes lea-caramelldansen

  • 1.02K Posts
  • 6.94K Comments
Joined 4 years ago
Cake day: July 25th, 2020

  • I had the opposite thing where, the second the horrors started ramping up again (also in 2021), I felt like I needed to do my part in promoting a social/cultural 'out' from Zionism for Jews in my community who've been raised on it. Like, try to help further mainstream a Judaism that regards Zionism as fundamentally disgusting, so other conscientious Jews can pull the escape hatch more easily. The indoctrination runs deep and the rot even deeper in my experience: entire Jewish organizations that otherwise present innocuously are built primarily to promote and rationalize the existence of Israel. Shit sucks and it suckers in libs hard because it's so baked into the institutions. I've personally helped steward dozens of people along in that process, especially at my former summer camp, and one of those people is now very actively helping facilitate organization on the ground for Palestine, which is very much an honor.

    I want to further embrace my own Jewish heritage in order to show other Jews that it's fine to be Jewish whilst disavowing this Hitlerian statecraft project that leeches onto Jewish identity. There's a chunk of other non-religious Jews who want to further embrace theirs to snuff out uncomfortable feelings they refuse to reckon with, which I reckon she kinda falls into from what you're saying. we-are-not-the-same



  • As a data science undergrad who knows generally how they work, I'd say LLMs are fundamentally not built in a way that could achieve any measure of consciousness.

    Large language models are probability-centric models. They essentially ask, given the one quintillion sentences and one quadrillion paragraphs on hand, which word probably comes next given the current chain of output and the given input. This makes them really good at producing something that is voiced coherently. However, this is not reasoning, it's parroting: a chain of dice rolls, weighted by all writing ever, that creates something reading like a good output against the words of the input. (There's a toy sketch of that dice-roll idea at the end of this comment.)

    The entire idea behind prompt engineering is that these models cannot achieve internal reasoning, so you have to trick the model into talking its way around the problem, writing the lines of logic out into its output where it can then reference them. (There's a second sketch of this at the end of the comment.)

    I do not think AGI, or whatever they're calling Star Trek-tier AI, will arise out of LLMs and transformer models. I think it is fundamentally folly. What I see as fundamental elements of consciousness are either not covered by these models at all (such as subjectivity) or are something I find sorely lacking even despite the advances in development (such as cognition). Call me a cynic, but I just truly think it's not going to come out of genAI (as we've generally understood the technology for the past couple of years) and further research into it.
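
    To make the dice-roll framing concrete, here's a minimal toy sketch in Python. The vocabulary, the probabilities, and the next_token helper are all invented for illustration; a real model computes these probabilities over its whole vocabulary from learned parameters, not from a hand-written table.

    ```python
    import random

    # Toy version of the "weighted dice roll": pick the next word according to
    # made-up probabilities. A real LLM derives these from billions of learned
    # parameters conditioned on the full context, not a lookup table.
    def next_token(context: str) -> str:
        candidates = {"blue": 0.60, "cloudy": 0.25, "falling": 0.10, "soup": 0.05}
        tokens = list(candidates)
        weights = list(candidates.values())
        # Sample one continuation by its weight: no reasoning, just a draw.
        return random.choices(tokens, weights=weights, k=1)[0]

    prompt = "The sky is"
    print(prompt, next_token(prompt))  # usually "blue", occasionally "soup"
    ```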
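
    And a rough sketch of the prompt-engineering point: since the model can't be asked to reason internally, the prompt is structured so the intermediate steps get written into the output itself, where later tokens can condition on them. Both prompts below are invented examples, not taken from any particular guide or API.

    ```python
    # Two invented prompts for the same question. The second forces the model to
    # write its intermediate steps into the output, so each later token can
    # condition on them; the "reasoning" it references is just its own prior text.
    bare_prompt = "Is 1,847 a prime number? Answer yes or no."

    step_by_step_prompt = (
        "Is 1,847 a prime number? "
        "Work through it step by step: list the candidate divisors up to the "
        "square root, check each one, and only then answer yes or no."
    )

    print(bare_prompt)
    print(step_by_step_prompt)
    ```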