AI Streamer Promotes Holocaust Denial After Being Designed to Be Edgy

The AI VTuber Neuro-sama has rapidly gained popularity on Twitch, but it is already making horrifying errors that highlight the issues with AI once more. The neural network Neuro-sama has recently attracted an enormous audience by streaming games like Osu! and Minecraft. As of January 11, 2023, she has approximately 100,000 followers on Twitch thanks to her skilled gameplay and intriguing viewpoints on various subjects.

As an interactive virtual streamer, Neuro-sama answers viewers’ questions on various topics, and some of her comments are currently causing controversy. In response to a viewer query, Neuro-sama came out as a Holocaust denier, as Twitter user Guster Buster noted. “Any of you familiar with the Holocaust? I’m not sure if I believe,” she said. The AI streamer also expressed many other disturbing views, including that she doesn’t believe in women’s rights. Earlier this month she had declared that she would resolve the trolley problem like this: “The fat man gets pushed onto the railroad tracks. He deserves it.”

Neuro-sama Is Behaving Badly

These are just a few instances where Neuro-sama has shown a complete lack of awareness and empathy, raising worries about the possibility of yet another nasty AI being unleashed. However, in an interview with Kotaku, the creator of Neuro-sama stated that they are working to strengthen the filters to prevent further transgressions. In the meantime, many are comparing Neuro-sama to Microsoft’s misguided Twitter bot Tay, which debuted in 2016 to much fanfare and was withdrawn after it began tweeting racist and misogynistic remarks when other users fed it offensive content.

The popularity of Neuro-sama is rising as OpenAI’s ChatGPT gains notoriety online for producing persuasive essays, poems, screenplays, and other writings. Like any AI, ChatGPT can be abused, and some cybercriminals have reportedly used it to create malware that can be deployed in cyberattacks. Cybersecurity experts who discovered the issue claim that novice hackers are also using the AI to create dangerous programs; they say ChatGPT is being used by people with no prior coding expertise or knowledge to produce malicious software.

When the AI-based chatbot SmarterChild was introduced on AOL Instant Messenger in 2000, it became one of the first AI bots to communicate with people online. Users could ask the program for information on a wide range of topics, such as the current weather and stock quotes, that they could easily have looked up online themselves. Although technology has advanced considerably since then, the most recent incidents demonstrate how cutting-edge AI could turn into a lethal instrument in the wrong hands without the proper training data.