• OgdenTO [he/him]
    ·
    edit-2
    4 months ago

    I don't think these "it's not feasible" arguments make sense or are being made in good faith. It's not a dichotomy between "it's not happening at all" and "they're recording or streaming everything we say through the most demanding natural language models".

    Maybe they're doing it only on certain triggers. Maybe they're listening only for certain keywords. Maybe they record and analyze 10 seconds of audio every time you get a message on your phone. Maybe they're using really low-quality recordings. Maybe they've deferred the processing to run in the background, since there's no requirement to do real-time NLP.

    Like, there's a huge range of potential options between 0 and 100, especially with newer, energy-efficient, optimized ML algorithms.
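
    To make the "triggers instead of always-on streaming" idea concrete, here's a toy sketch. Everything in it is hypothetical (the keyword list, the fake recorder, the queue): the point is just that capture can be cheap and event-driven, with the actual analysis deferred to an idle-time background pass, so no real-time NLP is ever required.

    ```python
    import queue

    # Hypothetical ad-relevant trigger words (purely illustrative).
    KEYWORDS = {"vacation", "mattress", "insurance"}

    pending_clips: "queue.Queue[str]" = queue.Queue()

    def record_clip() -> str:
        # Stand-in for a cheap, low-quality ~10-second mono recording;
        # a real implementation would return raw audio, not text.
        return "we should book that vacation in march"

    def on_notification() -> None:
        """Trigger path: capture a short clip and queue it, no processing."""
        pending_clips.put(record_clip())

    def background_pass() -> set:
        """Idle-time path: 'transcribe' queued clips and match keywords."""
        hits = set()
        while not pending_clips.empty():
            words = set(pending_clips.get().lower().split())
            hits |= KEYWORDS & words
        return hits

    on_notification()
    print(background_pass())  # {'vacation'}
    ```

    Nothing here needs to run in real time or leave the device; only the tiny set of keyword hits would ever need to be reported anywhere.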

    • quarrk [he/him]
      ·
      4 months ago

      Copying my other comment:

      Google Pixels have long had a Now Playing feature that can identify songs playing nearby. As far as I know, it’s all on-device and offline. There are pre-loaded hashes of a bunch of common songs which can then be compared to ambient sounds.

      So there could be similar on-device functionality that recognizes spoken trigger words and records them to some file, which could then be accessed by apps that serve ads.

      This would be different from (and more efficient than) recording large audio files and sending them over the internet.
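
      A toy sketch of that pre-loaded-hash idea (all names and data hypothetical, and real systems like Now Playing use acoustic fingerprints rather than raw-byte digests): ship a compact table of known-snippet hashes with the device, hash ambient audio the same way, and report only the match, with no audio ever sent over the internet.

      ```python
      import hashlib

      def fingerprint(samples: bytes) -> str:
          # Stand-in for an acoustic fingerprint; here just a short digest.
          return hashlib.sha256(samples).hexdigest()[:16]

      # Pre-loaded on-device database (hypothetical song snippets).
      KNOWN = {fingerprint(s): name for name, s in [
          ("song_a", b"\x01\x02\x03\x04"),
          ("song_b", b"\x09\x08\x07\x06"),
      ]}

      def identify(ambient: bytes, window: int = 4):
          # Slide a window over ambient audio, looking for a known hash.
          for i in range(len(ambient) - window + 1):
              match = KNOWN.get(fingerprint(ambient[i:i + window]))
              if match:
                  return match
          return None

      print(identify(b"\x00\x00\x01\x02\x03\x04\x00"))  # song_a
      ```

      The only thing that ever needs to leave the device is the match label (or a trigger-word hit), which is tiny compared with shipping audio files upstream.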