https://nitter.1d4.us/MKBHD/status/1668048839675092992

  • Frank [he/him] · 1 year ago

    I don’t think ChatGPT is alive, but I do think that at some point the systems underlying things like it will grow sufficiently complex that you could call them conscious and, again, like I said, you’d have to resort to some real questionable arguments to say that biological life is different.

    People are already failing the Turing test against ChatGPT, but as it stands it doesn't do anything like thinking. All it can do is produce sequences of letters based on statistical weights in its training model. There's no awareness, no context, no abstraction, no cognitive process whatsoever. It can't reflect, it can't evaluate truth values, it can't perform any cognitive processes at all.

    People talk about lying and hallucination, but both of those terms are deeply misleading and fundamentally misunderstand what the models are doing. It can't lie - it has no theory of mind. It has no awareness of itself, let alone anything else. It can't hallucinate either. It's not answering questions incorrectly or giving you the wrong information. It's not answering your questions at all. It's comparing the sequence of letters in your prompt to sequences of letters in its training set and producing a statistically weighted sequence of letters based on that training set. There's no abstraction, there's no manipulation of concepts. There's no cognition.

    That's why the image plagiarism generators can't draw hands, or really any complex object except faces - hands come in so many different shapes that the model weighs the colors and shapes of fingers, but lacking abstraction it cannot recognize that all of those shapes represent the same conceptual object. It doesn't know that most hands have five fingers because it doesn't know what hands or fingers are. It doesn't know anything. It's a math problem that compares numerical values and produces outputs statistically similar to the input prompt.
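    If it helps to see what "a statistically weighted sequence of letters" means in its most stripped-down form, here's a toy character-level bigram sketch in Python. It is nowhere near what ChatGPT actually is (real models use learned neural-network weights over tokens and attend to the whole prompt, not just the last character), and the corpus string and function names here are made up purely for illustration, but it shows the bare loop being described: count what tends to follow what in the training text, then extend the prompt one character at a time by weighted random choice.

```python
# Toy character-level bigram model: a vastly simplified stand-in for the
# "statistically weighted sequence of letters" idea above. The corpus and
# names are illustrative only; real LLMs use learned neural weights over
# tokens, but the loop -- score candidates, sample one, append, repeat --
# is the same basic shape.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the rat"  # made-up training text

# "Training": count how often each character follows each character.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(seed: str, length: int = 40) -> str:
    """Extend the seed one character at a time by weighted random choice."""
    out = seed
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:  # last character never seen in the corpus; stop
            break
        chars, weights = zip(*options.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("the "))  # emits corpus-flavored gibberish, with zero understanding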

    Personally I think trying to make thinking computers is stupid bazinga stuff. We already have fully functioning, self-aware machines capable of all kinds of extremely complex functions we mostly don't understand yet. We should work with what we've got instead of trying to recreate one of the most complex emergent systems in observable reality when we don't even understand its basic operating principles. Augmenting existing brains, to me at least, makes a lot more sense than fussing around scratching crude drawings on rocks.