Is there anything good to read that compares Dreyfus' critique of AI and the new technological developments in "AI"?

Do contemporary researchers even bother to answer his criticisms anymore? Is anyone writing philosophically informed critiques of LLMs as "AI"? Do "AI" researchers even bother trying to respond to the history of philosophy and consciousness?

Edit: Has anyone read Negarestani's Intelligence and Spirit?

  • FumpyAer [any, comrade/them]
    ·
    edit-2
    11 months ago

    My dad had me try out Bing GPT-4. I asked a question about a very famous artist's compositions. It constantly mixed up biographical details of that artist with another famous composer who has a similar name, said things that were flatly wrong (e.g. it claimed he wrote a bunch of songs that he had recorded but didn't write), and basically didn't answer the question until I corrected each mistake individually. Some of the mistakes I already knew were wrong; others I had to verify myself. Altogether, it took me 30 minutes to get an answer that I still don't know for sure is correct.

    Basically, I trust LLMs less than I would trust a 3rd grader who is plagiarizing Wikipedia.

    • Parsani [love/loves, comrade/them]
      hexagon
      ·
      edit-2
      11 months ago

      Yeah, that's been my experience at times too. At other times, the info the LLM spit out was essentially accurate. In neither case, though, does the LLM understand what it is saying in the way a human does.

      However, my question isn't about whether these programs are good at what they purport to do, but about how ML/LLM projects conceptualize their relation to the philosophy of consciousness and of (artificial) intelligence. And I don't mean in the tech-blogger way, but in a way which engages with historical ideas about what intelligence and consciousness are (or even the difference between the two), and through that with the problems/limitations of creating something which could actually be called intelligent or conscious. That's why I'm curious whether any AI researchers today have responded to Dreyfus, since he wrote his work before the new ML/LLM systems.

      • dat_math [they/them]
        ·
        edit-2
        11 months ago

        how ML/LLM projects conceptualize their relation to philosophy of consciousness

        They generally don't, because hard neuroscience has yet to elucidate enough about the mechanics of consciousness for anybody to confidently connect it to this thing we like to call "attention": a mechanism that VERY loosely resembles, and has computational analogues in (insofar as we can conclude from primate visual behavior), the circuitry of mammalian brains, while being in many ways extremely different from it. And speculating on philosophy without a sound foundation of evidence demonstrating its material connection to the computational systems in question is a great way to get your paper rejected in the first round of review at any serious journal.

        I know a lot of STEM nerds, but precious few who read any serious (non-scientific) philosophy in earnest.

        • Parsani [love/loves, comrade/them]
          hexagon
          ·
          edit-2
          11 months ago

          It seems like a terrible blind spot to ignore the centuries of philosophy that have tried to conceptualize this issue, even without hard neuroscientific data to back it up in any concrete way (if that is ever even possible). Though STEM's aversion to philosophy isn't unusual.

          Do they simply look for a purely mechanical account of consciousness that is removed from any environment? Do social relations in the production of (self) consciousness, identity and/or intelligence ever figure into it? How do AI researchers conceptualize AI/intelligence/consciousness/etc., or do they even try outside of finding the right combination of light switches? I guess I'm also asking, how the fuck do they even know what they are looking for without a concept of what it is?

          I'm not in neuroscience or a related field, so I have little idea of what people are writing about this outside of the tech journalism drivel which is just marketing.

          Have you read any of Negarestani's Intelligence and Spirit? He seems to be trying to formulate a way to even begin to think about what a general (artificial) intelligence could be conceptually, through Hegel, Kant, and what I assume are a bunch of analytic and scientific writers I know little about tbh.

          • dat_math [they/them]
            ·
            edit-2
            11 months ago

            Sorry for taking so long to respond.

            Do social relations in the production of (self) consciousness, identity and/or intelligence ever figure into it?

            Yes! There is some serious work on theories of consciousness in neuroscience, though it's hard to sift through the bullshit. There are probably as many philosophers as experimental neuroscientists published in Neuroscience of Consciousness, and some of that work must discuss these dependencies. Unfortunately, when computational neuroscience people in particular start talking about consciousness, it can be especially hard to tell whether they're bullshitting, because their models/theories tend to involve a lot of complicated maths (for example, Giulio Tononi's Integrated Information Theory) and they aren't always testable/falsifiable in the straightforward way a lot of computational neuroscience work is.

            I guess I'm also asking, how the fuck do they even know what they are looking for without a concept of what it is?

            They largely don't know what they're looking for from the computational side, so I think the more prominent directions of research in the neuroscience of consciousness approach the problem from the other end: take a circuit known to perform some computation that is demonstrably important for producing some aspect of consciousness, perturb it, and then observe the post-perturbation neurological activity and the organism's behavior in its environment.

            Have you read any of Negarestani's Intelligence and Spirit? He seems to be trying to formulate a way to even begin to think about what a general (artificial) intelligence could be conceptually, through Hegel, Kant, and what I assume are a bunch of analytic and scientific writers I know little about tbh.

            I haven't, but I read a quick description that says Negarestani rejects the ubiquity of mind and the inevitable emergence/evolution of a superintelligence, so I'm interested in learning how they formulate and argue those rejections.

            • Parsani [love/loves, comrade/them]
              hexagon
              ·
              11 months ago

              Np, I appreciate the information.

              Thanks for the journal recommendation; I see a few articles that look interesting. I come at this problem from philosophy, namely phenomenology, which I know has gotten some attention from people working on theories of consciousness, though maybe not so much in AI research...? That's why I was asking about Dreyfus.

              I've only started Negarestani's book, so I don't have much to say about it right now other than that it's interesting so far.

  • invalidusernamelol [he/him]
    ·
    11 months ago

    The basic structure of LLMs and neural networks is definitely controversial. Chomsky hates them because they totally go against his theories of language.

    I think overall they do a good job of approximating the high-level process of thought, but it's kind of like approximating pi as 4. Sure, you'll get pretty close a lot of the time, but you can't really do much with that beyond approximation.

    The superstructure of neural networks is basically:

    Training Data (historical knowledge) -> Activation Layer (perception) -> Abstraction Layers (thought process, there can be lots of these) -> Output Layer (action)
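
    To make that pipeline concrete, here is a minimal sketch in plain numpy. The layer sizes, sigmoid nonlinearity, and random weights are illustrative assumptions, not anything canonical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Weights connecting: input (perception) -> two abstraction layers -> output (action).
    W1 = rng.normal(size=(784, 128))  # activation/input layer -> first abstraction layer
    W2 = rng.normal(size=(128, 64))   # first -> second abstraction layer
    W3 = rng.normal(size=(64, 10))    # second abstraction layer -> output layer

    def forward(x):
        h1 = sigmoid(x @ W1)     # "perception": raw input hits the first weights
        h2 = sigmoid(h1 @ W2)    # "thought process": stacked abstraction layers
        return sigmoid(h2 @ W3)  # "action": output scores, e.g. 10 class labels

    x = rng.normal(size=(1, 784))  # a fake flattened image standing in for training data
    print(forward(x).shape)        # (1, 10)
    ```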

    "Training" a network basically involves splitting the data in half, using one half as input to fit the abstraction layers to a result (e.g. a cat picture is correctly identified), and then holding out the other half to test the network on examples whose answers it was never given.

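    A rough sketch of that holdout idea, with random stand-in arrays in place of real cat pictures:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 784))   # stand-in inputs (e.g. flattened pictures)
    y = rng.integers(0, 2, size=1000)  # stand-in labels (cat / not-cat)

    idx = rng.permutation(len(X))
    train, test = idx[:500], idx[500:]     # splitting the data in half

    X_train, y_train = X[train], y[train]  # used to fit the weights
    X_test, y_test = X[test], y[test]      # answers withheld until evaluation
    ```
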
    As you tweak those neuron activation weights and connections, you're meant to be simulating how neurons fire within the human brain in an incredibly simplified way.

    Modern networks, LLMs included, are trained by backpropagation, which takes the error at the output and propagates it backward through the chain of layers, so that neurons even in the lowest-level layers get their weights tweaked in proportion to their contribution to that error. In effect, the network rewires itself in response to its training inputs.
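
    For illustration, here is one backpropagation step for a tiny one-hidden-layer network. The tanh nonlinearity, squared-error loss, and learning rate are assumptions chosen for brevity:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 1))
    x = rng.normal(size=(1, 4))
    target = np.array([[1.0]])
    lr = 0.1

    # Forward pass.
    h = np.tanh(x @ W1)
    y = h @ W2
    err = y - target  # error at the output layer

    # Backward pass: the output error flows back through the chain rule,
    # so even the weights in the lowest layer get nudged by it.
    grad_W2 = h.T @ err
    grad_h = err @ W2.T
    grad_W1 = x.T @ (grad_h * (1 - h**2))  # tanh'(z) = 1 - tanh(z)^2

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    ```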

    All of this has been implemented physically in the past and roughly works, but the training part could take weeks. Simulating it in code has reduced that time to hours.

    That being said, the actual philosophical questions about thought haven't been approached much in this field; instead, researchers are attempting to digitize the physical processes of neuron activity.

    • Parsani [love/loves, comrade/them]
      hexagon
      ·
      11 months ago

      Activation Layer (perception)

      I'll have to read more on how neural nets work, but I don't quite understand how this is analogous to perception. Perceptual experience seems to be a very important part of consciousness (at least in the philosophy I have read), but I don't see much about it in writing on AI. Instead, there is a lot of this:

      attempting to digitize the physical processes of neuron activity

      • invalidusernamelol [he/him]
        ·
        edit-2
        11 months ago

        The perception layer is just the input layer. It's not really any different from the other layers "physically", but it's the first point of contact for the network.

        Like with LLM chatbot implementations, that layer is just the input text after it has been tokenized (broken into units and mapped to integer IDs, a process with roots in compression). With image-generation networks, the input is likewise tokenized text.
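
        As a toy illustration of tokenization, text becomes a sequence of integer IDs before the network ever sees it. Real LLMs use subword schemes like byte-pair encoding (which is where the kinship with compression comes from); this word-level vocabulary is a deliberate simplification:

        ```python
        # Hypothetical toy vocabulary; real tokenizers have tens of thousands of entries.
        vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

        def tokenize(text):
            return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

        print(tokenize("The cat sat on the mat"))  # [1, 2, 3, 4, 1, 5]
        ```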

        With the first machine learning systems, they literally used photoresistors pointed at a dot-matrix display. All that really matters at that layer is that it can drive some sort of input state or gradient that is then propagated through the other layers of neurons.

        The concept of a neuron in machine learning is basically just a weighted connection; in the earliest physical implementations, the weights were literally resistances: potentiometers that you fiddled with until the input data, after passing through the network, returned the desired result (circle or square). With modern implementations, the resolution is a lot higher, so the output options run to something like 100,000,000 or more "concepts", arrived at by feeding in known inputs and tweaking the dials (weights) until the desired output is returned. This is all done more or less by brute force (in practice, by gradient descent) over the course of years with a smaller computer or months with a large one.
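
        As a sketch of that idea: a single "neuron" is just a weighted sum pushed through a threshold, which is why a variable resistance could stand in for each weight in the early hardware. The weights and inputs here are made up:

        ```python
        import numpy as np

        def neuron(inputs, weights, bias):
            # Fire (1.0) if the weighted sum of inputs clears the threshold.
            return 1.0 if np.dot(inputs, weights) + bias > 0 else 0.0

        # "Fiddling with the dials": these weights decide which inputs matter.
        print(neuron(np.array([1.0, 0.0, 1.0]), np.array([0.5, -0.2, 0.7]), -1.0))  # 1.0
        ```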

        In the end, the whole idea is to arrive at a set of weights that correctly routes a high (or partially high) signal to an output neuron or neuron cluster.

        This whole approach is based on the idea that words form vectors in n-dimensional space, as opposed to phonetic interpretations of language. That is true of how language is structured, but an LLM doesn't learn language through sounds and context; it compiles an n-dimensional matrix of all words and their semantic relationships.
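
        A sketch of the words-as-vectors idea: semantic relatedness falls out of geometry. The 3-dimensional vectors here are made up for illustration; real embeddings are learned during training and have hundreds or thousands of dimensions:

        ```python
        import numpy as np

        emb = {
            "king":  np.array([0.9, 0.8, 0.1]),
            "queen": np.array([0.9, 0.7, 0.9]),
            "apple": np.array([0.1, 0.9, 0.2]),
        }

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        print(cosine(emb["king"], emb["queen"]))  # higher: semantically close
        print(cosine(emb["king"], emb["apple"]))  # lower: less related
        ```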

        But basically, the way machine learning people treat perception is just as the physical gradients that activate the senses: the hairs in your ear detecting pressure frequencies, your rods and cones detecting light frequencies, your skin detecting heat and pressure gradients.

        There's no real view of the "whole" in terms of perception, just the wiring of different measurement values into a dynamically rewired system that can be adjusted to reroute electrical potentials to specific areas that can then be given meaning.