• mayo_cider [he/him]
    ·
    1 year ago

The only areas of machine learning that I expect to live up to the hype are those where somewhat noisy input and output doesn't ruin usability, like image and audio processing and generation, or where you have to validate the output anyway, like automated copy-paste from Stack Exchange. Anything that requires actual specificity and factuality straight from the output, like language models attempting to replace search engines (or worse, professional analysis), will for the foreseeable future be tainted with hallucinations and misinformation.

  • fckreddit@lemmy.ml
    ·
    1 year ago

    They reached the right end pretty quickly. One of the reasons I gave up on ML rather fast: hyperparameter tuning is really, really random.

  • lemmonade@lemm.ee
    ·
    edit-2
    1 year ago

    There is truth in this, but it isn't as true as some people seem to think. It's true that trial and error is a real part of working in ML, but it isn't just luck whether something works or not. We do know why some models work better than others for many tasks, there are cases where some manual hyperparameter tuning is good, there has been a lot of progress in the last 50 years, and so on.
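    For what it's worth, even the "random" part of tuning is usually systematized as random search over a log-scaled space rather than blind guessing. Here's a minimal stdlib-only sketch; the objective function is a made-up stand-in for a real validation score (in practice you'd train a model per trial), and all names and ranges are hypothetical:

    ```python
    import random

    # Toy stand-in for a validation score. A real run would train and
    # evaluate a model; this surface just peaks at lr=0.1, reg=0.01
    # so the search has something to find.
    def validation_score(lr, reg):
        return 1.0 / (1.0 + (lr - 0.1) ** 2 + (reg - 0.01) ** 2)

    def random_search(n_trials, seed=0):
        rng = random.Random(seed)
        best_score, best_params = float("-inf"), None
        for _ in range(n_trials):
            # Sample on a log scale -- common practice for learning
            # rates and regularization strengths.
            lr = 10 ** rng.uniform(-4, 0)
            reg = 10 ** rng.uniform(-5, -1)
            score = validation_score(lr, reg)
            if score > best_score:
                best_score, best_params = score, {"lr": lr, "reg": reg}
        return best_score, best_params

    score, params = random_search(200)
    print(score, params)
    ```

    Trial and error, yes, but reproducible (seeded) and informed by knowing which parameters matter and on what scale to search them.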