Consider https://arstechnica.com/robots.txt or https://www.nytimes.com/robots.txt and how they block the AI crawlers from scraping their content for free.
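
For a sense of what those rules look like, here's an illustrative snippet in the same spirit (not the actual contents of either file, which change over time):

```
# Illustrative only — the named AI crawlers are shut out entirely,
# everyone else is allowed.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Disallow:
```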

  • neo [he/him] · 1 month ago

    I used to sit and monitor my server access logs, and you can tell the well-behaved bots from the bad ones by their access patterns. Many of the well-behaved bots announce themselves in their user agents, so you can see when they visit. I could watch them crawl the main body of my website but never touch a subdomain that is clearly linked from the homepage yet disallowed in my robots.txt.
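
    Something like the rough sketch below is all that monitoring amounts to. It's a hypothetical illustration, not my actual tooling: it assumes a combined-format log at access.log, and /private stands in for whatever area your robots.txt disallows.

    ```python
    import re
    from collections import defaultdict

    # Combined log format: IP ident user [time] "METHOD path HTTP/x" status size "referer" "user-agent"
    LOG_LINE = re.compile(
        r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"'
    )

    paths_by_agent = defaultdict(list)
    with open("access.log") as f:  # assumed log location; adjust for your server
        for line in f:
            m = LOG_LINE.match(line)
            if m:
                _ip, _method, path, _status, agent = m.groups()
                paths_by_agent[agent].append(path)

    # Well-behaved crawlers announce themselves in the user agent;
    # see whether any named bot ever touched the disallowed area.
    for agent, paths in paths_by_agent.items():
        if "bot" in agent.lower():
            bad = sum(1 for p in paths if p.startswith("/private"))
            print(f"{agent}: {len(paths)} requests, {bad} in disallowed area")
    ```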

    On the other hand, spammy bots that are trying to attack you usually don't crawl at all: their access patterns probe your website for paths associated with common CMSes like WordPress.
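
    Spotting that probing can be as simple as counting hits on well-known scanner bait — a minimal sketch under the same access.log assumption as above:

    ```python
    from collections import Counter

    # Paths that scanners commonly try: WordPress bait plus a couple of
    # other perennial targets. Extend to taste.
    PROBE_PATHS = ("/wp-login.php", "/wp-admin", "/xmlrpc.php", "/.env", "/phpmyadmin")

    hits = Counter()
    with open("access.log") as f:  # assumed log location
        for line in f:
            if any(p in line for p in PROBE_PATHS):
                hits[line.split(" ", 1)[0]] += 1  # first field is the client IP

    for ip, n in hits.most_common(10):
        print(f"{ip}: {n} probe requests")
    ```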

    Google, for example, also provides a tool for testing robots.txt rules.
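
    And if you'd rather check programmatically, Python's standard library ships a robots.txt parser too. This fetches a live file and asks whether particular user agents may crawl particular URLs (the agent names and URLs here are just examples):

    ```python
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://www.nytimes.com/robots.txt")
    rp.read()  # fetches and parses the live file

    # can_fetch(useragent, url) applies the parsed rules.
    print(rp.can_fetch("GPTBot", "https://www.nytimes.com/section/technology"))
    print(rp.can_fetch("*", "https://www.nytimes.com/"))
    ```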