• Luffy@lemmy.ml · 24 points · 3 days ago

    TL;DR: The pentester had already found it himself, and wanted to test how often GPT finds it when he pastes that code into it

    • 8uurg@lemmy.world · English · 4 points · 3 days ago

      Not quite, though. In the blog post the pentester notes that it found a similar issue (one he had overlooked) occurring elsewhere, in the logoff handler, which he spotted and verified while sifting through a number of the reports it generated. Additionally, the fix it supplied accounted for (and documented) an issue that his own suggested fix was (still) susceptible to. This shows that it could be(come) a new tool that allows us to identify issues that are not found with techniques like fuzzing and can even be overlooked by a pentester actively searching for them, never mind a kernel programmer.
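
      For illustration only, here is a toy analog in Python (the real finding concerns C kernel code, which is not reproduced in this thread) of why a logoff handler is a natural home for this class of bug, and why a naive check-before-use fix can still be susceptible to a concurrent logoff. All names here are made up.

      ```python
      import threading
      import time

      class Session:
          """Shared session state that two request handlers may touch concurrently."""
          def __init__(self, user):
              self.user = user
              self.lock = threading.Lock()

      def naive_logoff(sess):
          # Tears down the session's user object (loosely analogous to freeing it).
          sess.user = None

      def naive_handler(sess):
          # Naive "fix": check before use. Still susceptible, because a concurrent
          # logoff can clear the user between the check and the use.
          if sess.user is not None:
              time.sleep(0.01)                    # widen the race window for the demo
              print("naive:", sess.user["name"])  # may fail: user vanished after the check

      def safe_logoff(sess):
          with sess.lock:
              sess.user = None

      def safe_handler(sess):
          # A fix that accounts for the concurrent logoff: both sides hold the same
          # lock (the kernel analogue would be proper locking or reference counting).
          with sess.lock:
              if sess.user is not None:
                  print("safe:", sess.user["name"])

      if __name__ == "__main__":
          sess = Session({"name": "alice"})
          t = threading.Thread(target=naive_handler, args=(sess,))
          t.start()
          naive_logoff(sess)   # runs concurrently with the handler above
          t.join()             # the naive handler typically hits the cleared user here

          # The locked variants can interleave in either order without blowing up.
          sess2 = Session({"name": "bob"})
          t2 = threading.Thread(target=safe_handler, args=(sess2,))
          t2.start()
          safe_logoff(sess2)
          t2.join()
      ```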

      Now, these models generate a ton of false positives, which makes the signal-to-noise ratio still much lower than what would be preferred. But the fact that a language model can locate and identify these issues at all, even if only sporadically, is already orders of magnitude more than what I would have expected initially. I would have expected it to only hallucinate issues, never finding anything remotely like an actual security issue. Much like the spam the curl project is experiencing.

      • Luffy@lemmy.ml · 8 points · 3 days ago

        Yes, but:

        To get to this point, OpenAI had to suck up almost all data ever generated in the world. So in order for it to become better, let's say it needs three times as much data. Gathering that alone would take more than three lifetimes, IF we don't count the AI slop and assume that all data is still human-made, which is just not true.

        In other words: What you describe will just about never happen anymore, at least not as long as 2025 is still remembered.

        • 8uurg@lemmy.world · English · 4 points · 3 days ago

          Yes, true, but that is assuming:

          1. Any potential future improvement solely comes from ingesting more useful data.
          2. That the amount of data produced is not ever increasing (even excluding AI slop).
          3. No (new) techniques that make it more efficient in terms of the data required for training are published or engineered.
          4. No (new) techniques that improve reliability are used, e.g. by specializing it for code auditing specifically.

          What the author of the blog post has shown is that it can find useful issues even now. If you apply this to a codebase, have a human categorize the reported issues as real or fake, and train the thing to make it more likely to generate real issues and less likely to generate false positives, it could still be improved specifically for this application. That does not require nearly as much data as general improvements do.
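
          To make that loop concrete, here is a rough sketch of the data-collection step: keep the model's reports, attach the human verdict, and emit a labelled dataset that a later fine-tuning or reward-model step could consume. The report contents, field names, and file path below are entirely made up for illustration.

          ```python
          import json

          # Hypothetical triage log: model-generated findings plus a human verdict.
          triaged_reports = [
              {"snippet": "void smb_logoff(...) { ... }",
               "finding": "possible use-after-free of the session's user object",
               "verdict": "real"},
              {"snippet": "int parse_header(...) { ... }",
               "finding": "claimed out-of-bounds read that the bounds check already prevents",
               "verdict": "false_positive"},
          ]

          # Turn the human triage into labelled training examples, so training can
          # push the model toward findings humans confirmed and away from the noise.
          with open("audit_feedback.jsonl", "w") as f:
              for report in triaged_reports:
                  example = {
                      "prompt": f"Audit this code for security issues:\n{report['snippet']}",
                      "completion": report["finding"],
                      "label": 1 if report["verdict"] == "real" else 0,
                  }
                  f.write(json.dumps(example) + "\n")
          ```

          The point is only that the feedback signal comes from routine triage work a human reviewer is doing anyway, which is far cheaper to accumulate than general-purpose training text.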

          While I agree that improvements are not a given, I wouldn’t assume that they could never happen anymore. Despite these companies effectively exhausting all of the text on the internet, improvements are currently still being made left, right, and center. If the many billions they are spending improve these models such that we have a fancy new tool for making our software safer and more secure: great! If it ends up being an endless money pit and nothing ever comes of it, oh well. I’ll just wait and see which of the two it will be.