Recently, while browsing the Unreal Wiki section of Beyond Unreal, I tried to visit the archived link for Gamespot's history of the development of Unreal, and I got this message:
Page cannot be crawled or displayed due to robots.txt.
Check it out, just to be sure.
http://web.archive.org/web/20050321075123/http://www.gamespot.com/features/makeunreal/index.html
For some reason, sites have been adding robots.txt files, and the Wayback Machine then blocks access to its archived copies of their old pages because of it. Why? And more importantly, why is Gamespot even using robots.txt? They must know it prevents Internet historians from seeing what their site used to look like, or from finding articles they no longer host.
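For anyone who hasn't seen one, a robots.txt file is just a plain text file at a site's root that tells crawlers what not to index. A rule like the sketch below (the Internet Archive's crawler identifies itself as "ia_archiver") would be enough to make the Wayback Machine stop showing a site's pages:

```
# Hypothetical robots.txt at http://www.example.com/robots.txt
# Blocks only the Internet Archive's crawler from the whole site
User-agent: ia_archiver
Disallow: /
```

Whether Gamespot's actual file looks like this or just uses a blanket "User-agent: *" rule, the effect on the archive is the same.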
Do the moderators know about this problem?