I don’t count productivity or coding as a net good, for what it’s worth. Fun, sure, but it’s truly incredible that the influx of AI hasn’t been used to create a more efficient way to distribute resources irl

Tech bros are a scourge on humanity and I truly wish for nothing but the worst for them

  • @monobot@lemmy.ml
    hexbear
    5
    1 month ago

    Looks like in the last year "AI" has come to mean "ChatGPT", which, for me, is just a slightly better spell checker and search engine.

    People are surprised that they can "communicate" with it, but it is not something that will change the world.

    On the other hand, Machine Learning and AI are being used all the time and are making jobs more efficient. Examples are computer vision for sorting apples, machine learning for detecting fraud, counting anything in drone or satellite images, detecting fires, or finding ships illegally hunting whales.

    Every industry is using it more and more, and it is making people's jobs better. It is not replacing people, but it is doing things we couldn't do before. Take plant counting in agriculture: no one was doing it before, but it is easy now, and it helps everyone involved, from farmers to sales, with optimisation.
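    To make that concrete, here is a toy sketch of what a naive plant counter could look like (purely illustrative: the OpenCV approach, image path, and thresholds are all made up here, and real systems use trained detectors rather than a simple green threshold):

    ```python
    # Toy plant counter: threshold "greenness" and count connected blobs.
    # Illustrative sketch only; not any real production pipeline.
    import cv2
    import numpy as np

    def count_plants(image_path: str, min_area: int = 50) -> int:
        img = cv2.imread(image_path)  # BGR frame from a drone; hypothetical path
        if img is None:
            raise FileNotFoundError(image_path)

        b, g, r = cv2.split(img.astype(np.float32))
        exg = 2 * g - r - b  # excess-green index highlights vegetation over soil
        exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # Otsu threshold, then a small morphological opening to remove speckle
        _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

        # Count connected components bigger than min_area pixels
        n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        return sum(1 for i in range(1, n_labels)
                   if stats[i, cv2.CC_STAT_AREA] >= min_area)

    if __name__ == "__main__":
        print(count_plants("field.jpg"))  # "field.jpg" is a placeholder
    ```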

    All this happens because of hardware improvement, nothing else.

    • booty [he/him]
      hexbear
      18
      1 month ago

      > a bit better spell checker and search engine.

      it's like a search engine except it completely makes up nonsense instead of showing you real things

      • Chronicon [they/them]
        hexbear
        11
        edit-2
        1 month ago

        and it came about just after real search engines stopped getting better and started getting worse lol. a real search engine, indexing solely content created by humans, is leagues better than a vibes based search engine that is often feeding on the outputs of other automation and "AI" systems.

        So far, in the workplace I've mostly heard of it making things worse, not better, but I'm sure some people are liking it. Whenever someone I work with breaks out chatgpt, they end up with something they don't understand but that sounds good at first blush, with deeper flaws often revealed only later.

        But if you count non-LLM/GPT machine learning, then fine, that stuff has found useful applications. Some of it is more flawed than anyone would like to admit, though, and I do not think it should be trusted on its own with a lot of important tasks (screening resumes, detecting fraud/plagiarism, etc.) without failsafes and human oversight where needed.