cross-posted from: https://programming.dev/post/8121843

~n (@nblr@chaos.social) writes:

This is fine...

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)

  • Daxtron2@startrek.website
    ·
    edit-2
    9 months ago

    I think this is extremely important:

    Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities.

    Bad programmers + AI = bad code

    Good programmers + AI = good code

  • Cyclohexane@lemmy.ml
    ·
    edit-2
    9 months ago

    A worrying number of my colleagues use AI blindly. Like the kind where you just press tab and don't even look. Those who do look spend a second before moving on.

    They call me anti-AI, even though I've used chatGPT since day 1. Those LLMs are great tools, but I'm just too paranoid to use them in that manner. I'd rather have it explain to me how to do the thing instead of doing the thing (which it's even better at).

    EDIT: Typo

      • Assian_Candor [comrade/them]
        ·
        edit-2
        9 months ago

        Is it really helpful / does it save a lot of time? I'm the world's #1 LLM hater (I don't trust it and think it's lazy), but if it's a very good tool I might have to come around

        • ericjmorey@programming.dev
          hexagon
          ·
          9 months ago

          I haven't been using it much, so I don't know if I'm a good judge. But I see it as an oversized autosuggestion tool that sometimes feels like an annoying interruption, but sometimes feels like it helped me move faster without breaking my train of thought.

          By "it", I mean I've tried several different ways to have an integrated LLM assistant integrated into my dev environment, none of which I was initially satisfied with in terms of workflow. But that's kinda true for every change I've made to my dev environment and workflows. It takes me a while to settle on anything new.

          I don't recommend any one in particular, but I do recommend taking the time to at least check them out. They have potential.

    • pkill@programming.dev
      ·
      9 months ago

      Also, one really good practice from the pre-Copilot era still holds, though many new Copilot users (my past self included) might forget it: don't write a single line of code without knowing its purpose. Another thing: while it can save a lot of time on boilerplate, whenever it uses your current buffer's contents to generate several lines of very similar code, you need to stop and think about whether it wouldn't be wiser to extract the repetitive code into a method (see the sketch below). Because while it's usually algorithmically correct, good design still remains largely up to humans.
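
      For illustration, a minimal hypothetical sketch (all names and fields invented) of the kind of near-duplicate completions I mean, and the small helper you'd extract them into:

      ```python
      # What a completion tool tends to generate from the surrounding buffer:
      # each line is "algorithmically correct", but the logic is copy-pasted.
      def load_config(raw: dict) -> dict:
          host = raw.get("host", "localhost")
          if not isinstance(host, str):
              raise ValueError("host must be a string")
          user = raw.get("user", "admin")
          if not isinstance(user, str):
              raise ValueError("user must be a string")
          region = raw.get("region", "us-east-1")
          if not isinstance(region, str):
              raise ValueError("region must be a string")
          return {"host": host, "user": user, "region": region}


      # The refactor worth stopping for: one helper, one place to fix or extend.
      def get_str(raw: dict, key: str, default: str) -> str:
          value = raw.get(key, default)
          if not isinstance(value, str):
              raise ValueError(f"{key} must be a string")
          return value


      def load_config_refactored(raw: dict) -> dict:
          return {
              "host": get_str(raw, "host", "localhost"),
              "user": get_str(raw, "user", "admin"),
              "region": get_str(raw, "region", "us-east-1"),
          }
      ```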