US wants to know if a text or image was generated by ChatGPT or Midjourney
Several large companies working on generative artificial intelligence have made commitments to the White House. They plan to develop technology to identify content generated by their tools. For the moment, how it will work remains rather vague.
One of the great dangers of generative AI is that it can mislead the public: that is the conclusion of public administrations around the world, which are now considering imposing obligations on companies that market AI tools. In response, those companies are beginning to commit to greater transparency about the content their tools generate. That is the case for OpenAI, Microsoft, Google, Meta, Amazon, Anthropic and Inflection: these seven companies have signed a voluntary pledge with the Biden-Harris administration to promote “the development of responsible AI.”
Large companies commit to more transparency about the content generated by their AIs
In a statement, the White House said these seven companies and the Biden-Harris administration have signed an agreement voluntarily committing the former “to contribute to the safe, secure and transparent development of AI technology.” For the US government, “companies developing these emerging technologies have a responsibility to ensure that their products are safe.” Three watchwords, then: safety, security and trust.

The idea behind this agreement is to control the sharing of text, images, video and audio generated by AIs. As OpenAI writes on its blog, “companies making this commitment recognize that AI systems may continue to exhibit weaknesses and vulnerabilities even after thorough analysis,” and more generally after they have been put on the market. Such content must be authenticated as having been generated by a tool rather than by a human, so as not to mislead the public. The White House particularly has deepfake videos in its sights: footage that can, for example, put whatever words you like into the mouth of President Joe Biden.
The administration will bring these companies before Congress to question them. The objective is to draw up an executive order as well as legislation to better regulate AI. Joe Biden has already signed an executive order directing “federal agencies to eradicate bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination.”
How the “watermark” check will work is not yet clear
The companies involved intend to opt for a “watermark,” a mark affixed to all generated content so that it is identified as such. It will have to indicate which AI tool was used for the generation. The US government welcomes this initiative, noting that “this measure allows creativity with AI to flourish while reducing the dangers of fraud and deception.”
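None of the signatories has published a watermarking specification, so purely as a toy illustration (not any company's actual scheme), here is how a crude text watermark could hide a provenance tag using zero-width Unicode characters; real proposals are statistical and far more robust:

```python
# Toy illustration only: hides a short provenance tag in text using
# zero-width Unicode characters. Not any signatory's actual scheme.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode bits 0 and 1

def embed_tag(text: str, tag: str) -> str:
    """Append the tag's bits to the text as invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ZW0 if b == "0" else ZW1 for b in bits)

def extract_tag(text: str) -> str:
    """Recover the hidden tag, if any."""
    bits = "".join("0" if ch == ZW0 else "1" for ch in text if ch in (ZW0, ZW1))
    whole = len(bits) - len(bits) % 8  # keep only complete bytes
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, whole, 8))
    return data.decode("utf-8", errors="replace")

marked = embed_tag("A perfectly ordinary sentence.", "gen-by:some-model")
print(extract_tag(marked))  # -> gen-by:some-model
```

A mark like this is trivially stripped by copy-paste sanitizers, which is exactly why the technical details the companies eventually settle on will matter.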

However, no concrete technical solution has been mentioned. Some companies have already taken first steps, but on a case-by-case basis. Ars Technica reports the story of fake images of an arrest of Donald Trump, generated with Midjourney, which went viral. The lab behind the tool banned the account of Eliot Higgins, the founder of the NGO Bellingcat, who had created them.
What OpenAI mentions, for example, is developing tools or APIs “to determine if a particular piece of content was created with an AI system.” Reading between the lines, this means that if you come across an article on the Internet, you could ask OpenAI whether it was written by ChatGPT. The same goes for Midjourney: a reverse-search system would let you find out whether a given image was generated by that AI. These systems would not include information about the user who generated the content.
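No such endpoint exists yet, and OpenAI has not published anything concrete; purely as a sketch of what such a lookup could look like from a client's perspective, with the URL, field names and response shape all assumptions of ours:

```python
# Hypothetical sketch: no real provenance endpoint exists yet.
# The URL, request fields and response shape below are placeholders.
import requests  # third-party: pip install requests

def check_provenance(content: str) -> bool:
    """Ask a (hypothetical) provenance service whether `content` is AI-generated."""
    resp = requests.post(
        "https://provenance.example.com/v1/detect",  # placeholder URL
        json={"content": content, "media_type": "text"},
        timeout=10,
    )
    resp.raise_for_status()
    # Per the pledge, the answer would say nothing about *who* generated it.
    return resp.json()["ai_generated"]  # assumed response field

if __name__ == "__main__":
    print(check_provenance("Some article text copied from the web."))
```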

On Google's side, the firm indicates on its blog that it soon wants to integrate a watermark and metadata into its generative tools. An “About this image” button will be added to the search engine, showing the context of an image as well as when it was first published online.
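Google has not detailed its scheme, but the metadata half of the idea is easy to picture. Here is a minimal sketch using Pillow to stamp a PNG with a provenance text chunk; the "ai-generator" key and model name are our own placeholders:

```python
# Minimal sketch of provenance metadata, not Google's actual scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), "white")  # stand-in for a generated image
meta = PngInfo()
meta.add_text("ai-generator", "some-image-model/v1")  # hypothetical tag
img.save("generated.png", pnginfo=meta)

# Anyone (a search engine, a fact-checker) can read the tag back:
print(Image.open("generated.png").text.get("ai-generator"))
```

Metadata like this is lost whenever an image is re-encoded or screenshotted, which is presumably why Google pairs it with a watermark embedded in the pixels themselves.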
The other measures planned by the creators of generative AIs
In addition to the future implementation of these watermarks, OpenAI, Microsoft, Google, Meta, Amazon, Anthropic and Inflection have agreed to systematically run internal and external tests on their systems before making them publicly available. These tests will be carried out by independent experts. All of them also said they are investing more in cybersecurity and strengthening vulnerability reporting. On this last point, the companies will implement “bounty systems, contests or prizes to incentivize the responsible disclosure of weaknesses, such as unsafe behaviors.”
These companies also “commit to sharing information on managing AI risks across the industry and with governments, civil society and academia,” the White House specifies. This includes reporting on the capabilities and limitations of their AIs, their appropriate and inappropriate areas of use, and their societal risks (such as effects on fairness and bias).
Consequences for AI around the world
In addition to all of this, the Biden-Harris administration has announced that it is working with partner governments “to establish a strong international framework governing the development and use of AI.” Among the countries it has consulted are Germany, Australia, Brazil, Canada, Chile, South Korea, the United Arab Emirates, France, India, Israel, Italy, Japan, Kenya, Mexico, New Zealand, Nigeria, the Netherlands, the Philippines, Singapore and the United Kingdom. The US government also says it is in discussions with the United Nations and its member states.

These discussions are taking place in parallel with the development of the AI Act, a piece of European Union legislation that also includes rules for generative AIs. The European Parliament and the Council of the European Union would like the final text to be adopted before 2024.