Projects like Anubis use a web-based proof of work to slow down and potentially stop bot traffic. The idea is that the proof of work makes the client spend some computational resources to access the page, so that it isn’t as computationally feasible to abuse public websites.
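For the unfamiliar, the core mechanism is a hashcash-style puzzle: the server hands out a random challenge, the client must find a nonce whose hash meets a difficulty target, and the server can check the answer with a single hash. A minimal sketch in Python (the SHA-256/leading-zero-bits scheme and the function names are illustrative, not Anubis’s exact algorithm):

```python
# Hashcash-style proof of work: expensive to solve, cheap to verify.
# The exact scheme (SHA-256, leading zero bits) is an assumption for
# illustration, not what Anubis actually ships.
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: bytes, difficulty: int) -> int:
    """Client side: brute-force nonces until the hash clears the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Server side: a single hash checks the claimed solution."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty
```

Each extra bit of difficulty doubles the client’s expected work, so the server can tune the cost without changing its own verification time.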

However, doing this all as a web service seems inefficient, since there is always a performance penalty tied to running it inside the page. My idea is that there could be a special HTTP protocol extension that requires the client to do a proof of work. Doing it at the browser/scraper level would be much more efficient, since the developer of the browser could tailor the code to the platform. It would also make it possible for bots to comply, which would still allow scraping, but in a way that is less demanding on the server. A sketch of what that handshake could look like is below.
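To make the idea concrete (the 429 status, the “PoW-Challenge” and “PoW-Nonce” header names, and the token format are all invented for illustration; nothing like this is standardized):

```python
# Hypothetical client flow for an HTTP-level proof-of-work challenge.
# Every header name and the handshake itself are assumptions, not a spec.
import hashlib
import urllib.request
from urllib.error import HTTPError

def fetch_with_pow(url: str) -> bytes:
    try:
        return urllib.request.urlopen(url).read()
    except HTTPError as err:
        if err.code != 429 or "PoW-Challenge" not in err.headers:
            raise
        # Imagined header format: "sha256;difficulty=20;token=abc123"
        _, diff_part, token_part = err.headers["PoW-Challenge"].split(";")
        difficulty = int(diff_part.split("=")[1])
        token = token_part.split("=")[1].encode()
        nonce = 0
        while True:  # brute-force search, as in the sketch above
            digest = hashlib.sha256(token + str(nonce).encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
                break
            nonce += 1
        retry = urllib.request.Request(url, headers={"PoW-Nonce": str(nonce)})
        return urllib.request.urlopen(retry).read()
```

A native browser or a well-behaved scraper could run the same loop in optimized platform code instead of JavaScript, which is the efficiency argument here.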

  • Possibly linux@lemmy.zip (OP) · 5 days ago

    All I’m talking about is a simple behind-the-scenes proof of work. It doesn’t need to be fancy and could be implemented with just a little extra code: the user visits a page and the browser does a proof of work at the request of the server. A rough server-side sketch follows.
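    On the server side it could indeed stay small. A rough sketch, assuming a stateless HMAC-signed challenge and a 20-bit difficulty (both are my assumptions, not a spec):

    ```python
    # Rough server-side sketch: issue a signed challenge, verify in one hash.
    # The stateless HMAC design and 5-minute expiry are assumptions.
    import hashlib, hmac, os, time

    SECRET = os.urandom(32)   # per-server signing key
    DIFFICULTY = 20           # ~a million hashes on average for the client

    def make_challenge() -> str:
        ts = str(int(time.time()))
        sig = hmac.new(SECRET, ts.encode(), "sha256").hexdigest()
        return f"{ts}:{sig}"  # server stores nothing between requests

    def check_solution(challenge: str, nonce: int) -> bool:
        ts, sig = challenge.split(":")
        expected = hmac.new(SECRET, ts.encode(), "sha256").hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False          # forged challenge
        if time.time() - int(ts) > 300:
            return False          # stale challenge
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0
    ```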

      • Possibly linux@lemmy.zip (OP) · 3 days ago

        This is why I think it would be a good idea to leverage browser-level code. Doing this in JavaScript is just not very efficient, and if you use WebAssembly you put mobile users at a disadvantage.

        I’ve learned to dislike Anubis, as it makes the mobile browsing experience awful.

        • moonpiedumplings@programming.dev · 3 days ago

          I switched to Fennec and it’s basically instant. Fennec also gets uBlock Origin, a much better ad blocker. But I’d been too lazy to switch before this.

    • CameronDev@programming.dev · 5 days ago

      It’s never just “a little extra code”. Each browser would have to implement it itself (although it could possibly be done in Chromium, with everyone downstream inheriting it by default), and each would run the feature through the standard debates around support, necessity, correctness, side-channel security issues, etc. Firefox might drag its feet, Chrome might implement it differently, Edge might strip it out because it hurts their scraper. Five years later it might get useful.