Wikimedia has seen a 50 percent increase in bandwidth used for downloading multimedia content since January 2024, driven by AI crawlers scraping its content to train generative AI models. The foundation has to find a way to address the problem, because the crawler traffic could slow down actual readers' access to its pages and assets.
Why would an AI company hire someone when they can just prompt an AI to write a script to download Wikipedia, and run it without even checking?
A man of our times!