The Take It Down Act, purportedly designed to combat non-consensual intimate imagery (NCII), including AI-generated deepfakes, raises significant free speech concerns. While it addresses the serious problem of revenge porn, the bill's vague language and lack of safeguards could allow powerful individuals and entities to suppress legitimate speech, including investigative journalism, political commentary, and satire. Critics argue that the legislation turns the internet into a legal minefield: it enables censorship based on accusations alone, invites selective enforcement, and chills encrypted communications, ultimately benefiting those with the resources to control their public image rather than protecting victims of online exploitation.
Editor’s Note: Censorship is a tricky thing. On one hand, governments want to protect those who may become victims of cybercrime. On the other hand, they risk eroding civil liberties. There is never a straightforward answer to what should be censored and what shouldn’t, as the internet has grown into a web far larger than any single country.
But then again, why do governments want to control the narrative online when there are so many other ways to protect people? Many find it hard to spot deepfakes only because they have been trained to “believe what they see”. What if governments focused instead on teaching students how to think critically? What if they taught people how to think holistically? What if people were taught good internet hygiene?
Censorship is not just a lazy “solution”. It is also counterproductive, because it does not solve the underlying problem.