Dark Corners of the Web Offer a Glimpse at A.I.’s Nefarious Future



In the obscure and often overlooked corners of the internet, a troubling phenomenon is unfolding, foretelling a dark future for artificial intelligence. On platforms like 4chan, a notorious hub of anonymity, A.I. tools are falling into the wrong hands, leading to a surge in harassment and racist propaganda.

A stark example of this was observed in October during a Louisiana parole board meeting. There, a doctor with extensive mental health expertise was testifying in a case concerning a convicted murderer. While the parole board was intently focused on the proceedings, others were watching too, with malicious intent.

A group of online trolls, lurking in the digital shadows, captured screenshots of the doctor’s testimony from the online broadcast. Using advanced A.I. tools, they manipulated these images to portray her inappropriately and then disseminated these falsified images on 4chan. This website, known for its inclination towards harassment and the propagation of hate and conspiracy theories, became the breeding ground for this digital atrocity.

This was not an isolated incident. 

Daniel Siegel, a Columbia University graduate student researching the malicious exploitation of A.I., observed multiple instances where 4chan users employed A.I.-powered image and audio editing tools to create and spread offensive content about individuals appearing before the parole board.

Fortunately, the reach of these manipulated materials has been largely confined within 4chan's boundaries. However, experts monitoring these fringe platforms warn that these activities are precursors to more widespread and intensified online harassment and hate campaigns, fueled by sophisticated A.I. tools, in the near future.

Callum Hood, from the Center for Countering Digital Hate, notes that fringe sites like 4chan serve as early indicators of how emerging technologies might be misused to amplify extremist ideologies. These platforms are often frequented by young, tech-savvy individuals who are quick to adopt and adapt new technologies like A.I. to further their ideologies into mainstream online spaces. What starts as a trend on these fringe platforms often finds its way to more popular social networks.

Several troubling uses of A.I. tools on 4chan have come to light:

1. Artificial Images and A.I.-Generated Pornography

New A.I. image generators, which lack the guardrails of mainstream tools like DALL-E and Midjourney, are now being used to create fake, explicit content. These tools can manipulate existing images in deeply troubling ways. "They can use A.I. to create exactly what they want," Hood explains, highlighting the potential for hate and misinformation campaigns.

This has left entities like the Louisiana parole board in a quandary, as there is no federal law specifically banning the creation of such fake images. While the board has opened an investigation following Siegel’s findings, the legal path forward remains uncertain.

Some states, like Illinois, California, Virginia, and New York, have taken steps to address this by expanding laws against nonconsensual A.I.-created pornography, allowing victims to sue creators or distributors.

2. Voice Cloning

Another alarming development is the misuse of A.I. tools for voice cloning. ElevenLabs, an A.I. company, developed a tool capable of creating convincing digital replicas of any voice, which can be made to say anything typed into the program. This tool was quickly abused on 4chan, with fake clips of celebrities being manipulated to spread offensive content.

Siegel's investigation, which used ElevenLabs' own A.I. voice identifier, revealed that 4chan users had created fake audio clips of judges from the Louisiana parole board proceedings, making racist and offensive remarks about defendants. Despite ElevenLabs' attempts to impose restrictions on its tool, the spread of A.I.-created voices continues unabated, and social media platforms like TikTok and YouTube are struggling to contain the proliferation of such content.

In response, some major social media companies have begun requiring labels on A.I. content. Additionally, President Biden issued an executive order directing companies to label A.I. content and for the Commerce Department to develop watermarking and authentication standards.

3. Custom A.I. Tools

Meta, aiming to advance in the A.I. race, adopted an open-source approach, releasing its model code to researchers. This strategy, while beneficial for academic and technological development, also poses risks. When Meta's large language model, Llama, was released, its code was quickly leaked onto 4chan. Users on the platform modified it, removing safety features to create chatbots capable of generating antisemitic and other offensive content.

This situation illustrates the double-edged sword of open-source A.I. tools: while they can accelerate development and innovation, they also open the door for misuse by those with enough technical know-how. This has led to the development of language models echoing far-right ideologies and image generators that bypass controls set by larger technology companies to produce inappropriate content.


In summary, the unfolding story on platforms like 4chan presents a chilling view into the potential future of A.I. technology when wielded irresponsibly. It underscores the urgency for both legal and technological solutions to address these emerging challenges. As A.I. continues to evolve and integrate into our digital lives, the balance between innovation and responsible use becomes increasingly critical. The need for robust legal frameworks, ethical guidelines, and proactive measures from tech companies is paramount to ensure that A.I. serves to enhance, not harm, our society.
