They’re bad, folks, everybody knows it, everybody says it. But they’re simultaneously better and worse than you think, because apparently nobody knows how these products actually work. (I’m mostly going to be discussing google home, because that’s what I’m familiar with; I assume echos use similar technology, but I can’t speak for them.)

So to get started: no, your smart speaker isn’t recording everything you say and shipping it off somewhere. At a hardware level, there are two boards and a tiny bit of cache. The speaker is constantly listening for the trigger words, processing about 2 seconds of audio held in that cache at a time, but all of that happens locally on the first board. Only once it recognizes the trigger words does it establish a connection to the cloud and use that to process your request. Once your request is complete, it goes back into standby mode. You can look at the packets coming out of the device and see that it only connects to the internet when it needs to. The onboard cache is small and constantly being overwritten, so by design there’s no way for it to keep a running record of you.
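
If it helps to picture it, the whole standby loop is basically a small ring buffer plus a local hotword check. Here’s a rough sketch of that loop in Python (the buffer size, sample rate, and helper names are all made up for illustration; this obviously isn’t Google’s actual firmware):

```python
# Toy sketch of the on-device standby loop described above. Everything here
# is illustrative: real devices run this on a dedicated hotword chip, not Python.
import collections
import random

SAMPLE_RATE = 16000      # assumed mic sample rate
BUFFER_SECONDS = 2       # roughly what the local cache holds

# Ring buffer: new audio overwrites the oldest samples, so nothing older
# than ~2 seconds ever exists on the device.
ring = collections.deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

def read_microphone(n):
    """Stand-in for the mic driver: returns n fake audio samples."""
    return [random.randint(-32768, 32767) for _ in range(n)]

def detect_hotword(samples):
    """Stand-in for the local hotword model on the first board.
    In the real device this runs entirely offline."""
    return False

def stream_request_to_cloud(buffered_audio):
    """Only reached after the hotword fires; this is the first and only
    point where a network connection would be opened."""
    print(f"would open a connection and stream {len(buffered_audio)} samples")

while True:
    chunk = read_microphone(SAMPLE_RATE // 10)   # ~100 ms of audio at a time
    ring.extend(chunk)                           # oldest samples fall off the back
    if detect_hotword(ring):
        stream_request_to_cloud(list(ring))      # cloud only gets involved here
        ring.clear()                             # then straight back to standby
```

The point is structural: the only place a network connection even exists in that loop is after the hotword fires, which is exactly what the packet traffic shows.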

However, what IS nefarious is how much permission you have to give google over the data it does capture. Of course they use the captured audio for expected things like training their voice recognition AI, but you also give them permission to store all that data indefinitely, with metadata tracing it back to you, AND it’s not off limits to engineers.

That’s right: there’s a possibility, however small, that a real person will end up listening to you asking google to play your erotic jazz playlist. Once that audio is on the cloud, you basically don’t own it anymore, and google can do whatever they want with it.

So should you be worried? If you want to be, I guess. I’ve resigned myself to the fact that I lost all my digital privacy before I was even born, and will happily tell google to turn off my lights while lying in bed like a fat sack of shit, but it comes down to what you’re comfortable with. Either way, I just want people to actually understand what these things are and how they work, because there’s plenty to criticize, and it pays to be criticizing the right things.

  • FailsonSimulator2020 [any] · 4 years ago

    The fact that it doesn’t upload all the time under normal circumstances doesn’t mean it couldn’t be made to, with relatively little effort, if they wanted it to. There’s fundamentally nothing preventing that behavior from being changed remotely by an update pushed over the internet.

    • TheJoker [he/him] · 4 years ago

      That’s not wrong, but again, you can just look at the packets. As soon as google does do this, you’ll know about it, because it’s gonna be a big fuckin news story.
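
      If anyone actually wants to do that, here’s roughly what it looks like. This is a sketch that assumes scapy and a machine positioned to see the speaker’s traffic (your router, a mirrored switch port, etc.), and the IP is a placeholder for whatever address your router handed the device; tcpdump or Wireshark will get you the same picture:

      ```python
      # Log the timing and size of every packet to or from the speaker.
      # Outside of a trigger-word request you should only see occasional
      # check-ins; a constant stream of traffic would be the red flag.
      from datetime import datetime
      from scapy.all import sniff

      SPEAKER_IP = "192.168.1.50"   # placeholder: check your router's client list

      def log_packet(pkt):
          ts = datetime.fromtimestamp(float(pkt.time)).strftime("%H:%M:%S")
          print(f"{ts}  {len(pkt):>5} bytes  {pkt.summary()}")

      # Needs root; the BPF filter limits the capture to this one device.
      sniff(filter=f"host {SPEAKER_IP}", prn=log_packet, store=False)
      ```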

      • the_river_cass [she/her] · 4 years ago

        you're assuming it gets turned on universally rather than targeted at undesirables via an NSL from the state.

        • TheJoker [he/him] · 4 years ago

          Again though, the data being uploaded by the device can be monitored locally. If you’re paranoid, monitor the data. Yes, it can happen, but also there’s no need to fear technology just because it’s technology. It’s not all skynet out there.

          • the_river_cass [she/her] · 4 years ago

            obviously. but the closed firmware and closed-source software make it hard to validate what's really happening on the device, so you have to rely on other systems to monitor it. and once you're doing that, you're talking about something that requires serious technical know-how to operate, and more still to interpret the data you get back. but these devices are sold as convenience packages to laypeople.

            if your threat model includes the state (as everyone here presumably does), you have two choices. either saddle yourself with a bunch of work to maintain vigilance over devices in your home that, strictly speaking, aren't necessary -- assuming you have the knowledge and skills to hold that vigilance in the first place -- or choose technologies that are easier to validate and that support an actual base of trust.

          • the_river_cass [she/her] · 4 years ago

            no? most political activists don't have that kind of technical training?