https://futurism.com/the-byte/government-ai-worse-summarizing

The upshot: these AI summaries were so bad that the assessors agreed using them could actually create more work down the line, because of the amount of fact-checking they require. If that's the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.

  • QuillcrestFalconer [he/him]
    ·
    3 months ago

    They bury the lede in the article though. They used Llama 2 70B, which is not a great model.
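
    For context, a minimal sketch of what running a summarization prompt through that model might look like. This is purely illustrative: it assumes the Hugging Face transformers pipeline and the public meta-llama/Llama-2-70b-chat-hf weights, and the prompt wording is made up; it is not whatever setup the trial actually used.

        # Hypothetical sketch only: summarizing a submission with Llama 2 70B
        # via the Hugging Face transformers text-generation pipeline.
        from transformers import pipeline

        # The model id matches the model the article says was used; swapping in a
        # newer instruction-tuned model is the obvious comparison to make.
        summarizer = pipeline(
            "text-generation",
            model="meta-llama/Llama-2-70b-chat-hf",
            device_map="auto",  # requires the accelerate package
        )

        document = "..."  # the submission text to be summarized
        prompt = (
            "Summarize the following submission in five bullet points:\n\n"
            + document
            + "\n\nSummary:"
        )

        # Greedy decoding keeps the summary deterministic for comparison runs.
        result = summarizer(prompt, max_new_tokens=300, do_sample=False)
        print(result[0]["generated_text"])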

    • UlyssesT
      ·
      edit-2
      21 days ago

      deleted by creator

        • UlyssesT
          ·
          edit-2
          21 days ago

          deleted by creator

            • UlyssesT
              ·
              edit-2
              21 days ago

              deleted by creator

                • UlyssesT
                  ·
                  edit-2
                  21 days ago

                  deleted by creator

                        • BodyBySisyphus [he/him]
                          ·
                          3 months ago

                          I've been thinking about this comment a lot over the last couple of days. I do my research in agriculture and food systems so I've had a lot of exposure to the "future is rural" philosophy, but it's mainly in the context of climate change. It seems like anyone talking sense about the trajectory our society is on is quietly buying small plots of land for smallholder agriculture or posting about how farms are probably going to stop supplying food systems and start focusing on meeting their own needs as conditions get less hospitable. It's interesting to consider that there's a convergent response emerging as a result of automation.

                          Meanwhile I'm sitting here on my small, expensive urban plot that couldn't sustain more than some summer vegetables, because I thought I'd get bored doing actual agriculture. blob-no-thoughts

                    • UlyssesT
                      ·
                      edit-2
                      21 days ago

                      deleted by creator

                        • UlyssesT
                          ·
                          edit-2
                          21 days ago

                          deleted by creator

                            • UlyssesT
                              ·
                              edit-2
                              21 days ago

                              deleted by creator

                                • UlyssesT
                                  ·
                                  edit-2
                                  21 days ago

                                  deleted by creator

                                    • UlyssesT
                                      ·
                                      edit-2
                                      21 days ago

                                      deleted by creator

                  • soupermen [none/use name]
                    ·
                    edit-2
                    3 months ago

                    Hey there, I've got no stakes here and I don't want to speak for anyone, but I think what happened here was that QuillcrestFalconer and DPRK_Chopra were simply pointing out that the technology is rapidly evolving, that its capabilities even just a couple of years ago were far below what they are now, and that it appears it will continue to develop like this. So their point would be that we still need to prepare and anticipate that it may soon advance to the point where employers will be more willing to try to replace real workers with it. I don't think they were implying that this would be a good thing, or that it would be a smart or savvy move, just that it's a possible and maybe even a likely outcome. We've already seen various industries attempt to start doing that with the limited abilities of "AI" as it is, so to me it does seem reasonable to expect them to want to do that more as it gets better. Okay, thanks for reading. 👋

                    • UlyssesT
                      ·
                      edit-2
                      21 days ago

                      deleted by creator

                      • soupermen [none/use name]
                        ·
                        3 months ago

                        Okay. I am under no illusion that current technology is anywhere near replicating digital brains. I don't think that's what QuillcrestFalconer or DPRK_Chopra were saying either. When we say "replace workers" we mean "replace the functions that those workers do for their employers". We're not talking about making a copy of your coworker Bob, but about making a program that does many of the tasks currently assigned to Bob in a manner that isn't too much worse than the real guy (from the warped perspective of management and shareholders, of course), with anything the machine can't do delegated to someone else who gets paid a pittance. That's what we're talking about, nothing about recreating human intellects. I put the term AI in scare quotes in my first comment because I too am well aware that it's a misnomer, but it's the term everyone knows this technology by (via marketing and such, like you said), so it's easy to fall back on it. LLM, or "AI" in scare quotes: I don't think the specific term really matters in this context, because we're not talking about true intelligence but about automation of task work that is currently done by paid human employees.

                        • UlyssesT
                          ·
                          edit-2
                          21 days ago

                          deleted by creator

                  • impartial_fanboy [he/him]
                    ·
                    3 months ago

                    > Maybe stop ignoring entire fields of research that, to this date, are still figuring out what biological brains are doing and how they are doing them instead of just nodding along to what you already want to believe from people that have blinders for anything outside of their field (computers, in this case).

                    Well first, brains aren't the only kind of intelligent biological system, but they aren't actually trying to 1-for-1 recreate the human brain, or any other brain for that matter; that's just marketing. The generative side of LLMs is what gets the focus in the media, but it's really not the most scientifically interesting part, nor the part that will actually change that much, all things considered.

                    These systems are absolutely fantastic at finding real patterns in chaotic systems. That's where the potential lies.

                    > It's like if people were trying to develop rocketry to achieve space travel, but you and yours were smugly stating that this particularly sharp knife will cut the heavens open, just you wait.

                    More like trying to go to the moon with a Civil War-era rocket; it is early days yet. But progress is insanely quick.

                    • UlyssesT
                      ·
                      edit-2
                      21 days ago

                      deleted by creator

    • Hexboare [they/them]
      ·
      3 months ago

      What's the model that does work with this use case?

      (I don't think there is one)