Projects like Anubis use a web-based proof of work to slow down and potentially stop bot traffic. The idea is that the proof of work makes the client spend some compute to access the page, so that it isn’t as computationally feasible to abuse public websites.
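For context, the core mechanism is simple: the server hands out a challenge, the client grinds through nonces until a hash clears a difficulty target, and the server checks the result with a single hash. Here is a minimal hashcash-style sketch; the function names and challenge format are illustrative only, not Anubis’s actual scheme or API:

```ts
import { createHash } from "node:crypto";

// Find a nonce such that sha256(challenge + nonce) starts with
// `difficulty` leading zero hex digits. The client burns CPU here.
function solve(challenge: string, difficulty: number): number {
  const prefix = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256")
      .update(`${challenge}:${nonce}`)
      .digest("hex");
    if (digest.startsWith(prefix)) return nonce;
  }
}

// Verification costs a single hash, so the server's side stays cheap.
function verify(challenge: string, nonce: number, difficulty: number): boolean {
  const digest = createHash("sha256")
    .update(`${challenge}:${nonce}`)
    .digest("hex");
  return digest.startsWith("0".repeat(difficulty));
}

const nonce = solve("example-challenge", 4); // ~65,000 hashes on average
console.log(nonce, verify("example-challenge", nonce, 4)); // prints the nonce and `true`
```

The asymmetry is the whole point: the client does tens of thousands of hashes, the verifier does one.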

However, doing this all as a web service seems inefficient, since running the challenge in page JavaScript always carries a performance penalty. My idea is that there could be a special HTTP protocol addition that would require the client to do a proof of work. Doing it at the browser/scraper level means it would be much more efficient, since the developer of the browser could tailor the code to the platform. It would also make it possible for bots to comply, which would still allow scraping, but in a way that is less demanding on the server.
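To make that concrete, here is one way the handshake could look at the HTTP level. The status code (429), the header names (PoW-Challenge, PoW-Seed, PoW-Nonce), and the challenge grammar are all invented for this sketch; nothing like this is standardized:

```ts
import { createHash } from "node:crypto";

// Search for a nonce whose hash clears the difficulty target.
function solve(seed: string, difficulty: number): number {
  const prefix = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256").update(`${seed}:${nonce}`).digest("hex");
    if (digest.startsWith(prefix)) return nonce;
  }
}

// Fetch a URL, solving the server's challenge if one is issued.
async function fetchWithPoW(url: string): Promise<Response> {
  const first = await fetch(url);
  const challenge = first.headers.get("PoW-Challenge"); // e.g. "sha256;difficulty=4;seed=ab12cd34"
  if (first.status !== 429 || !challenge) return first; // no challenge issued

  const params = new Map(
    challenge.split(";").slice(1).map((kv) => kv.split("=") as [string, string]),
  );
  const nonce = solve(params.get("seed")!, Number(params.get("difficulty")));

  // Retry, presenting the solution so the server can verify it with one hash.
  return fetch(url, {
    headers: { "PoW-Seed": params.get("seed")!, "PoW-Nonce": String(nonce) },
  });
}
```

A browser (or a well-behaved scraper) could run this loop natively instead of in page JavaScript, which is where the claimed efficiency win comes from.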

  • slazer2au@lemmy.world · 4 days ago

    I want to say no, purely based on the idea that server code being executed in a local browser will make sandbox-escape or RCE bugs far more dangerous.

    If a site wants work done, it should pay for it with its hosting.

  • CameronDev@programming.dev · 4 days ago

    Keep in mind that any browser-side extension runs the risk of breaking access for smaller browsers, so it can’t be used until there is near-100% market saturation, which could take years.

    And you have to be very careful not to break accessibility for blind/etc. users, who may have a non-standard browser to begin with.

    It would be nice if the AI robots.txt thing could be legally enforced.

    • Possibly linux@lemmy.zip (OP) · 4 days ago · edited

      All I’m talking about is a simple behind-the-scenes proof of work. It doesn’t need to be fancy and could be implemented with just a little extra code. The user visits a page and the browser does a proof of work at the request of the server.
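      Roughly, the server side of that could be as small as the sketch below (same invented 429/PoW-Challenge convention as above; a real implementation would also need to track issued seeds so solutions can’t be replayed or precomputed):

      ```ts
      import { createServer } from "node:http";
      import { createHash, randomBytes } from "node:crypto";

      const DIFFICULTY = 4; // leading zero hex digits required

      createServer((req, res) => {
        const seed = req.headers["pow-seed"]; // Node lower-cases header names
        const nonce = req.headers["pow-nonce"];
        if (typeof seed === "string" && typeof nonce === "string") {
          const digest = createHash("sha256").update(`${seed}:${nonce}`).digest("hex");
          if (digest.startsWith("0".repeat(DIFFICULTY))) {
            res.end("the actual page"); // proof checks out: serve the content
            return;
          }
        }
        // No proof (or a bad one): issue a fresh challenge and make the client work.
        res.statusCode = 429;
        res.setHeader(
          "PoW-Challenge",
          `sha256;difficulty=${DIFFICULTY};seed=${randomBytes(8).toString("hex")}`,
        );
        res.end();
      }).listen(8080);
      ```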

        • Possibly linux@lemmy.zip (OP) · 3 days ago · edited

          This is why I think it would be a good idea to leverage browser-level code. Doing this in JavaScript is just not very efficient, and if you use WebAssembly you are putting mobile users at a disadvantage.

          I’ve learned to dislike Anubis, as it makes the mobile browsing experience awful.

          • moonpiedumplings@programming.dev · 3 days ago

            I switched to Fennec and it’s basically instant. Fennec also gets uBlock Origin, a much better ad blocker. But I’d been too lazy to switch before this.

      • CameronDev@programming.dev · 4 days ago

        It’s never just “a little extra code”. Each browser will have to implement it themselves (although it could possibly be done in Chromium, with everyone downstream inheriting it by default), and each will run the feature through the standard debates around support, necessity, correctness, side-channel security issues, etc. Firefox might drag its feet, Chrome might implement it differently, Edge might strip it out because it hurts their scraper. Five years later, it might be useful.