Consider https://arstechnica.com/robots.txt or https://www.nytimes.com/robots.txt and how they block all the stupid AI models from being able to scrape for free.
The robots.txt mechanism is completely voluntary, and some bots even use it to specifically target the content it lists.
In my opinion, anyone relying on this to protect their content has no business publishing anything online.
See: https://en.m.wikipedia.org/wiki/Robots.txt
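For illustration, here's a minimal robots.txt in the style those sites use. GPTBot and CCBot are real crawler user-agent tokens, but the specific rules below are a hypothetical sketch, not copied from either site:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

This asks those two named crawlers to stay out entirely while leaving the site open to everything else, but it's purely a request: nothing enforces it.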
We will sue them over their unauthorized use in the marketplace of ideas.
Of course it's voluntary, but if entities like OpenAI say they will respect it then presumably they really will.
Couple of things:
I agree with your points 2-4, but I have observed on my own website that the crawlers that don't respect robots.txt won't follow it no matter what, and the crawlers that do respect it genuinely will.
How did you find this information? I know how to check traffic for my website, but I don't know how to get from "list of IPs" to "these ones are crawlers".
apologies if this is a silly question
I used to sit and monitor my server access logs. You can tell by the access patterns. Many of the well-behaved bots announce themselves in their user agents, so you can see when they're visiting. I could see them crawl the main body of my website but never go to a subdomain that is clearly linked from the homepage yet disallowed in my robots.txt.
On the other hand, spammy bots that are trying to attack you will often have access patterns that probe your website for common configurations of popular CMSes like WordPress. They don't tend to crawl.
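The two access patterns above can be told apart mechanically. A rough sketch in Python, assuming a Combined Log Format access log; the bot names and probe paths listed are illustrative examples, not an exhaustive set:

```python
import re

# Well-behaved crawlers announce themselves in the User-Agent header;
# spammy bots instead probe for common CMS paths (e.g. WordPress logins).
KNOWN_BOT_UAS = ("Googlebot", "bingbot", "GPTBot", "CCBot")
PROBE_PATHS = ("/wp-login.php", "/wp-admin", "/xmlrpc.php", "/.env")

# Matches the request, status, size, referrer, and user-agent fields
# of a Combined Log Format line.
LOG_RE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+) [^"]*" \d+ \d+ "[^"]*" "(?P<ua>[^"]*)"'
)

def classify(log_line: str) -> str:
    m = LOG_RE.search(log_line)
    if not m:
        return "unparsed"
    path, ua = m.group("path"), m.group("ua")
    if any(bot in ua for bot in KNOWN_BOT_UAS):
        return "declared-bot"
    if any(path.startswith(p) for p in PROBE_PATHS):
        return "probe"
    return "other"

line = ('1.2.3.4 - - [10/Jun/2024:12:00:00 +0000] '
        '"GET /wp-login.php HTTP/1.1" 404 123 "-" "Mozilla/5.0"')
print(classify(line))  # probe
```

Note that user agents can be spoofed, so for the declared bots you'd still want to verify the source IPs (e.g. via the ranges crawl operators publish) before trusting the label.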
Google also provides a tool to test robots.txt, for example.
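You can also test rules locally with Python's standard library instead of Google's tool. A small sketch using `urllib.robotparser`; the rules fed to it here are hypothetical, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# Parse an in-memory robots.txt and check whether a given
# user agent may fetch a given URL under those rules.
rp = RobotFileParser()
rp.parse("""
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

`can_fetch` answers the same question Google's tester does: given this robots.txt, is this agent allowed to request this path?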
Perhaps this will help your understanding of my first point.
https://gizmodo.com/former-openai-board-member-sam-altman-chatgpt-1851506252
Eh, will they really? It'd be pretty hard to prove they didn't respect it.
Could it work as a way to legally establish that consent was never given?
It's not about relying on it, it's about changing the behaviour of the web crawlers that do respect it, which, as someone who has adminned a couple of scarily popular sites over the years, is a surprisingly high percentage of them.
If someone wants to get around it, they obviously can, but this is true of basically all protective measures ever. Doesn't make them pointless.