Don’t enter this in ChatGPT. You’ll be quickly cut off
OpenAI, the company behind the popular ChatGPT, announced on Friday that it had banned the accounts of an Iranian group that created content aimed at, among other things, influencing the US presidential election, Reuters reported.
Big brother is watching
The account ban is the result of an investigation conducted by OpenAI in cooperation with Microsoft. The Iranian operation, codenamed Storm-2035, used ChatGPT to generate both short comments and full-length articles on topics such as the US presidential candidates, LGBTQ rights, the Gaza conflict, and Israel’s presence at the Olympic Games, framing them in a highly polarizing way.
The content generated in this way was then shared via websites and social media. However, the scale of the operation was small: most of the identified social media posts drew few interactions, and the articles prepared by the group did not achieve significant reach or shares online.
Operation Storm-2035 is currently experiencing technical difficulties
OpenAI has preemptively blocked the Iranian accounts, and those behind Operation Storm-2035 must now look for other tools. It is difficult to say, however, whether OpenAI’s systems will be able to detect the operators’ eventual return to the platform under new accounts.
The way OpenAI processes user data is a separate issue. If the company can track what content Iranian fake news creators are preparing, then any of us could at some point fall victim to a policy enforcing the only “correct” version of content.
