TIL: Stack Overflow allows all crawlers (by accident)
I’d expect Stack Overflow to
✅ allow all search crawlers
🛑 block all LLM training material crawlers
but when you run a crawler check: all crawlers allowed (!)
but that’s not the whole story
because they do have a robots.txt w/ blocks (!)
the robots.txt says:
🛑 do not crawl any pages (“/”)
it even says explicitly:
🛑 not for search
🛑 not for ai training
and it applies to any user agent, any crawler (“*”)
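based on that description, the file roughly amounts to this (a sketch, not the verbatim file):

```
# applies to any user agent, any crawler
User-agent: *
# do not crawl any pages
Disallow: /
```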
but why does the crawler check then say everything is green?
the robots.txt is served w/ HTTP status 418
418 originated as a joke status code:
I’m a teapot
but that status code is in the 400-499 range (!)
and the RFC for robots.txt (RFC 9309)
says that if the robots.txt is served with a status in the 400–499 range
a crawler may access any resources
as if there were no robots.txt at all
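that rule can be sketched in a few lines of Python, using the stdlib robots.txt parser (a sketch, assuming the crawler check works roughly like this; `may_crawl` and the example path are made up for illustration):

```python
import urllib.robotparser  # stdlib RFC 9309-style robots.txt parser

def may_crawl(status: int, robots_body: str, path: str) -> bool:
    """Decide crawl permission given the robots.txt HTTP status (sketch)."""
    if 400 <= status <= 499:
        # RFC 9309: a 4xx response (418 included) means "unavailable",
        # so crawlers may act as if no robots.txt exists at all
        return True
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_body.splitlines())
    return parser.can_fetch("*", path)

# a robots.txt like the one described: block everything, for every agent
body = "User-agent: *\nDisallow: /\n"

print(may_crawl(200, body, "/questions"))  # False: rules honored, "/" disallowed
print(may_crawl(418, body, "/questions"))  # True: 4xx, treated as absent
```

same file, opposite outcome, purely because of the status code it ships with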
and that’s why the crawler check currently turns up green for Stack Overflow
not sure what is right for Stack Overflow
nor whether their robots.txt is set up as they intended
(I’d expect them to block crawlers for ai training but allow crawlers for search)
but it shows:
getting robots.txt right can be quite tricky