• r00ty@kbin.life · 13 points · 10 hours ago

    If you’re running nginx, this is what I’m using:

    if ($http_user_agent ~* "SemrushBot|Semrush|AhrefsBot|MJ12bot|YandexBot|YandexImages|MegaIndex.ru|BLEXbot|BLEXBot|ZoominfoBot|YaK|VelenPublicWebCrawler|SentiBot|Vagabondo|SEOkicks|SEOkicks-Robot|mtbot/1.1.0i|SeznamBot|DotBot|Cliqzbot|coccocbot|python|Scrap|SiteCheck-sitecrawl|MauiBot|Java|GumGum|Clickagy|AspiegelBot|Yandex|TkBot|CCBot|Qwantify|MBCrawler|serpstatbot|AwarioSmartBot|Semantici|ScholarBot|proximic|MojeekBot|GrapeshotCrawler|IAScrawler|linkdexbot|contxbot|PlurkBot|PaperLiBot|BomboraBot|Leikibot|weborama-fetcher|NTENTbot|Screaming Frog SEO Spider|admantx-usaspb|Eyeotabot|VoluumDSP-content-bot|SirdataBot|adbeat_bot|TTD-Content|admantx|Nimbostratus-Bot|Mail.RU_Bot|Quantcastboti|Onespot-ScraperBot|Taboolabot|Baidu|Jobboerse|VoilaBot|Sogou|Jyxobot|Exabot|ZGrab|Proximi|Sosospider|Accoona|aiHitBot|Genieo|BecomeBot|ConveraCrawler|NerdyBot|OutclicksBot|findlinks|JikeSpider|Gigabot|CatchBot|Huaweisymantecspider|Offline Explorer|SiteSnagger|TeleportPro|WebCopier|WebReaper|WebStripper|WebZIP|Xaldon_WebSpider|BackDoorBot|AITCSRoboti|Arachnophilia|BackRub|BlowFishi|perl|CherryPicker|CyberSpyder|EmailCollector|Foobot|GetURL|httplib|HTTrack|LinkScan|Openbot|Snooper|SuperBot|URLSpiderPro|MAZBot|EchoboxBot|SerendeputyBot|LivelapBot|linkfluence.com|TweetmemeBot|LinkisBot|CrowdTanglebot|ClaudeBot|Bytespider|ImagesiftBot|Barkrowler|DataForSeoBo|Amazonbot|facebookexternalhit|meta-externalagent|FriendlyCrawler|GoogleOther|PetalBot|Applebot") { return 403; }

    That will block those that actually use recognisable user agents. I add any I find as I go on. It will catch a lot!

    I also have a huuuuuge IP-based block list (generated by adding all of the ranges returned from looking up the following AS numbers):

    AS45102 (Alibaba Cloud), AS136907 (Huawei SG), AS132203 (Tencent), AS32934 (Facebook)

    Since these guys run or have run bots that impersonate real browser agents.

    There are various tools online that will return the prefix/IP lists for an autonomous system number.

    I put both into a single file and include it in my website config files.
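
    Roughly, the combined include file ends up looking something like this (the filename, the trimmed regex and the deny prefixes are all just illustrative; the real deny list is whatever the ASN lookups return, and the real user-agent regex is the full one above):

    # bots.conf - pulled into each server{} block via an include directive
    # User-agent check (regex trimmed here; use the full list above)
    if ($http_user_agent ~* "SemrushBot|AhrefsBot|Bytespider|ClaudeBot") {
        return 403;
    }

    # IP ranges gathered from the ASN lookups, e.g. via RADB:
    #   whois -h whois.radb.net -- '-i origin AS32934'
    # A few illustrative prefixes for AS32934 (Facebook/Meta):
    deny 31.13.64.0/18;
    deny 66.220.144.0/20;
    deny 157.240.0.0/16;
    # ...and so on for AS45102, AS136907 and AS132203

    Each site config then just needs an include /etc/nginx/bots.conf; line inside its server block (path illustrative).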

    EDIT: Just to add, keeping on top of this is a full time job!

    • ctag@lemmy.sdf.org (OP) · 4 points · 7 hours ago

      Thank you for the detailed reply.

      keeping on top of this is a full time job!

      I guess that’s why I’m interested in a tooling-based solution. My selfhosting is small-fry junk, but a lot of others like me are hosting entire fedi communities or larger websites.

      • r00ty@kbin.life · 4 points · 7 hours ago

        Yeah, I probably should look to see if there are any good plugins that do this on some community-submission basis. Because yes, it’s a pain to keep up with whatever trick they’re doing next.

        And unlike web crawlers that generally check a URL here and there, AI bots absolutely rip through your sites like something rabid.

        • Admiral Patrick@dubvee.org · 2 points · 7 hours ago

          AI bots absolutely rip through your sites like something rabid.

          SemrushBot being the most rabid from my experience. Just will not take “fuck off” as an answer.

          That looks pretty much like how I’m doing it, also as an include for each virtual host. The only difference is I don’t even bother with a 403. I just use Nginx’s 444 “response” to immediately close the connection.
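
          Concretely, it’s just the same kind of check with a different return code (user-agent list trimmed here; the full one is the regex further up the thread):

          # Same idea as the 403 block above, but 444 is nginx-specific: it
          # closes the connection without sending anything back at all.
          if ($http_user_agent ~* "SemrushBot|AhrefsBot|Bytespider|ClaudeBot") {
              return 444;
          }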

          Are you doing the IP blocks in Nginx as well, or lower down at the firewall level? Currently I’m doing it at the firewall level, since many of those will also attempt SSH brute forcing (good luck, since I only use keys, but still…)

          • r00ty@kbin.life · 3 points · 7 hours ago

            My mbin instance is behind Cloudflare, so I filter the AS numbers there. They don’t even reach my server.

            On the sites that aren’t behind Cloudflare, yep, it’s at the nginx level. I did consider doing it at the firewall level, maybe with a specific chain just for it, but since I was already blocking in nginx I just did it there for now. It keeps them off the content, but yes, it does tell them there’s a website there to leech from if they change their tactics, for example.

            You need to block the whole ASN too: the ones using Chrome/Firefox UAs switch to a random other IP from their huuuuuge pools every 5 minutes.
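
            One way to do the whole-ASN block at the nginx level is the geo module, something like the sketch below (the single prefix shown is just an illustrative AS32934 range, the real list being the generated one, and $blocked_asn is a name made up for the example):

            # In the http{} block: flag requests coming from the blocked ASN ranges.
            geo $blocked_asn {
                default         0;
                157.240.0.0/16  1;   # illustrative AS32934 prefix; real list is generated
                # ...every other prefix from the ASN lookups
            }

            # In each server{} block: drop the connection for flagged clients.
            if ($blocked_asn) {
                return 444;
            }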

  • Deckweiss@lemmy.world · 7 points · edited · 3 hours ago

    The only way I can think of is blacklisting everything by default, redirecting to a proper, challenging captcha (which can be self-hosted), and temporarily whitelisting proven-human IPs.

    When you try to “enumerate badness” and block all AI user agents and IP ranges, you’ll always let some new ones through, and you’ll never be done adding to the list.

    Only allow proven humans.


    A captcha will inconvenience the users. If you just want to make things worse for the crawlers, let them spend compute resources through something like https://altcha.org/ (which would still allow them to crawl your site, but make DDoSing very expensive) or AI honeypots.
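
    Roughly, and only as a sketch: assuming nginx built with the auth_request module, plus some small self-hosted verifier (entirely hypothetical here, listening on 127.0.0.1:8080) that remembers which client IPs have recently solved the challenge, the shape of it would be something like:

    # Inside the server{} block. The verifier at 127.0.0.1:8080 is hypothetical:
    # it returns 2xx for IPs that recently passed the challenge, 401 otherwise.
    location / {
        auth_request /_challenge_check;
        error_page 401 403 = @challenge;
        try_files $uri $uri/ =404;
    }

    location = /_challenge_check {
        internal;
        proxy_pass http://127.0.0.1:8080/check;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Real-IP $remote_addr;
    }

    location @challenge {
        return 302 /challenge;   # the self-hosted captcha / proof-of-work page
    }

    The captcha or altcha integration itself would live behind that verifier; on success it just has to remember the IP for a while.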

  • drkt@scribe.disroot.org · 9 points · 6 hours ago

    I am currently watching several malicious crawlers sit stuck in a 404 hole I created. Check it out yourself at https://drkt.eu/asdfasd

    I respond to all 404s with a 200 and then serve them that page full of juicy bot targets. A lot of bots can’t get out of it, and I’m hoping that the drive-by bots that look for login pages simply mark it as one (because it responded with 200 instead of 404), so a real human has to go and check and waste their time.
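
    If you wanted to do the same thing in nginx, it boils down to roughly this (filenames and paths are illustrative):

    # Inside the server{} block: answer every missing URL with a 200 and the
    # trap page instead of a 404, so scanners think they found something real.
    error_page 404 =200 /trap.html;

    location = /trap.html {
        internal;              # only reachable via the error_page rewrite
        root /var/www/trap;    # static page stuffed with fake links and forms
    }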

    • Daniel Quinn@lemmy.ca · 2 points · 3 hours ago

      This is pretty slick, but doesn’t this just mean the bots hammer your server, looping forever? How much processing do you do on those forms, for example?

  • Scrubbles@poptalk.scrubbles.tech · 6 points · 14 hours ago

    If I’m reading your link right, they are using identifiable user agents. Granted, there are a lot of them. Maybe you could whitelist the user agents you approve of? Or one of the commenters had a list that you could block. Nginx would be able to handle that.
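
    A rough sketch of the whitelist idea in nginx ($ua_ok and the browser regex are made up for illustration, and, as the reply below notes, bots can simply spoof a browser UA):

    # In the http{} block: only user agents that look like mainstream browsers pass.
    map $http_user_agent $ua_ok {
        default                           0;
        "~*(firefox|chrome|safari|edge)"  1;
    }

    # In each server{} block (or an include):
    if ($ua_ok = 0) {
        return 403;
    }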

    • ctag@lemmy.sdf.org (OP) · 1 point · 14 hours ago

      Thank you for the reply, but at least one commenter claims they’ll impersonate Chrome UAs.

          • ctag@lemmy.sdf.org (OP) · 6 points · 7 hours ago

            In the Hacker News comments for that Geraspora link, people discussed websites shutting down due to hosting costs, which may be attributable in part to the overly aggressive crawling. So maybe it’s just a different form of DDoS than we’re used to.

  • dudeami0@lemmy.dudeami.win · 5 points · 14 hours ago

    The only way I can think of is to require users to authenticate themselves, but this isn’t much of a hurdle.

    To get into the details of it: what do you define as an AI bot? Are you worried about scrapers grabbing the contents of your website? What are the activities of an “AI bot”? Are you worried about AI bots registering for and using your platform?

    The real answer is that not even Cloudflare will fully defend you from this. If anything, Cloudflare is just making sure they get paid for the access AI scrapers have to your website. As someone who has worked around bot protections (albeit in a different context than web scraping), it’s a game of cat and mouse. If you, or some company you hire, are not actively working against automated access, you lose, because the other side is active.

    Just think of your point that they are using residential IP addresses. How do they get these addresses? They provide browser addons/extensions that offer some service (generally free VPNs) in exchange for access to your PC, and therefore your internet connection, under the contract you agree to. The same can be done by any addon: if it has permission to read any website, it can scrape those websites through legitimate users for whatever purpose its operator wants. The recent exposure of the Honey scam highlights this, since it’s very easy to get users to install addons by telling them they might save a small amount of money (or make money for other programs). There will be users compromised by addons/extensions, or even just viruses, who will be able to extract the data you are trying to protect.

    • DaGeek247@fedia.io · 2 points · 3 hours ago

      Just think of your point that they are using residential IP addresses. How do they get these addresses?

      You can ping every IPv4 address in under an hour. If all you’re looking for is publicly available words written by people, you only have to poke port 80, and suddenly you have practically every small self-hosted website out there.

      • dudeami0@lemmy.dudeami.win · 1 point · edited · 52 minutes ago

        When I say residential IP addresses, I mostly mean proxies using residential IPs, which allow scrapers to mask themselves as organic traffic.

        Edit: Your point stands that there are a lot of services without these protections in place, but a lot of services do protect against scraping.

    • ctag@lemmy.sdf.org (OP) · 1 point · 8 hours ago

      Thank you for the detailed response. It’s disheartening to consider the traffic is coming from ‘real’ browsers/IPs, but that actually makes a lot of sense.

      I’m coming at this from the angle of AI bots ingesting a website over and over to obsessively look for new content.

      My understanding is there are two reasons to try blocking this: to protect bandwidth from aggressive crawling, or to protect the page contents from AI ingestion. I think the former is doable and the latter is an unwinnable task. My personal reason is that I’m an AI curmudgeon; I’d rather spend CPU resources blocking bots than serve any content to them.