I remember like seven years ago my Android keyboard's autocorrect was completely fine? Not only did it seem to remember my common finger-slip typos and correct them on sight, it could also autocorrect typos with numbers in them, i.e. recognizing a "3" as a missed "e", etc.

But nowadays my autocorrect fucking sucks rank ass? It's awful at guessing the correction for typos, and it looks like (to me) it only uses the first letter as the basis for correction, so if I mistype the first letter I'm shit out of luck: there's absolutely no way it's guessing what I mean when I say "I want vurgers for dinner"
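(For what it's worth, getting from "vurgers" to "burgers" doesn't take anything fancy: plain old edit distance handles a wrong first letter just fine. Here's a toy Python sketch; the word list is made up, and this is obviously nothing like what any real keyboard actually ships:)

```python
# Toy sketch of edit-distance-based candidate ranking. The word list is
# invented for illustration; real keyboards use far bigger lexicons plus
# keyboard-layout and language-model signals.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca -> cb
            ))
        prev = curr
    return prev[-1]

dictionary = ["burgers", "burglars", "surgeons", "verges", "burger"]
typo = "vurgers"
# Rank candidates by edit distance: "burgers" wins despite the wrong FIRST
# letter, which a first-letter-anchored lookup would never even consider.
print(sorted(dictionary, key=lambda w: levenshtein(typo, w)))
```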

But somehow it's gotten even WORSE than just bad at its job lately? I've had regular instances of the autocorrect changing a correctly spelled and grammatically correct word to something completely different. Just yesterday it autocorrected the word 'with' in the middle of a sentence to 'WI', as in the acronym for Wisconsin.

what is the point of making your basic functionality shittier? to sell premium keyboard apps? jesus christ

  • Quimby [any, any] · 2 years ago

    I think this is a consequence of the companies getting high on their own supply and forgetting that machine learning is just a fancy word for statistics. And if you don't do things like clean your data, curate your model, etc, your output is going to be shit. Along those same lines, letting computers train themselves / train each other starts off well enough, but then you stop paying attention, mistakes slip in, and they get reinforced and compounded. Computers are very good at statistics, but that doesn't mean a computer can understand what a given statistic actually means. Because they aren't sentient.
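    You can actually watch that reinforcement loop happen in about ten lines. A toy resampling sketch in Python, with invented frequencies, not a claim about how any real training pipeline works:

    ```python
    # Minimal simulation of models repeatedly training on their own output.
    # All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng()
    p = np.array([0.70, 0.25, 0.05])  # pretend word frequencies, one rare word

    for gen in range(30):
        # "Train" the next generation purely on the current model's output.
        sample = rng.choice(3, size=50, p=p)
        p = np.bincount(sample, minlength=3) / 50
        print(f"gen {gen:2d}: {np.round(p, 2)}")

    # Each generation re-bakes sampling noise into its "training data", so the
    # frequencies random-walk away from the truth. Once the rare word's share
    # hits zero it can never come back: the mistake is locked in for good.
    ```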

    • hexaflexagonbear [he/him] · 2 years ago

      forgetting that machine learning is just a fancy word for statistics

      Tbh another issue is that the statistics part gets ditched a lot of the time. I can't tell you how many times I've had a manager tell me to dig through the data to explain a "weird" result, and the answer is always "the effect is too small to be captured with the sample we used; running a statistical test gives you a p-value of like 0.2." And my manager, who is a data scientist, won't accept this. A similar thing happens when selecting models, where they'll have me pick some really dopey parameters because the new model "performs better", even though the difference between it and the old model is within a standard deviation.
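      To make that concrete, here's roughly what that kind of check looks like, with toy numbers (not anyone's real data): a tiny real effect plus a modest sample routinely comes back indistinguishable from noise.

      ```python
      # Toy illustration of "the effect is too small for this sample".
      # Numbers are invented; not anyone's actual data or model metrics.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      # Two metrics whose true means differ by a tiny 0.05 standard deviations.
      old = rng.normal(loc=0.00, scale=1.0, size=200)
      new = rng.normal(loc=0.05, scale=1.0, size=200)

      t, p = stats.ttest_ind(new, old)
      print(f"t = {t:.2f}, p = {p:.2f}")
      # At n=200 per group, a 0.05-sd effect has almost no power: the p-value
      # usually lands well above 0.05, so the "weird" result is just noise at
      # this sample size, and no amount of digging will change that.
      ```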

      Tbh it's not my manager's fault; he's just stressed because the expectations are ridiculous and so are the timelines.
