You should not entrust anything to ChatGPT, Google Bard or Bing

AI chatbots can be poor at protecting your personal data and may leak your conversations. This has already been proven with ChatGPT; now it is Google Bard's turn. Its conversations were found indexed on Google, allowing anyone to view them.

Conversations between users and Google Bard were published… on Google. For a while, the search engine indexed conversations for which users had created a share link. Everything has since returned to normal, but it adds a new controversy around AI chatbots and privacy.

Conversations with Google Bard… in the wild

It was SEO consultant Gagan Ghotra who sounded the alarm on X (formerly Twitter) on September 26. He posted a screenshot of a Google search that surfaced conversations with Google Bard that had been shared by users. However, those users had never authorized their indexing by a search engine.

In fact, a Bard feature lets you create a link to a conversation, which is useful for sharing your experiments with friends or colleagues. In theory, using keywords, anyone could find conversations on certain topics, and their content could help identify the users who prompted Bard's responses.

In response, the Google SearchLiaison account (dedicated to the search engine) replied on X: "Bard allows people to share chats if they want. We do not intend for these shared discussions to be indexed by Google Search. We are currently working to prevent their indexing." Since then, the same search no longer returns any pages, suggesting the bug has been fixed.
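Google has not detailed how it blocked these pages, but the standard web mechanism for keeping a URL out of search results is a `noindex` directive. A minimal, hypothetical sketch of what a shared-conversation page could include:

```html
<!-- Hypothetical shared-chat page: the robots meta tag asks search
     engines not to list this URL in their results, even if crawled. -->
<html>
  <head>
    <meta name="robots" content="noindex">
    <title>Shared conversation</title>
  </head>
  <body>…</body>
</html>
```

The same effect can be achieved server-side with an `X-Robots-Tag: noindex` HTTP response header. Note that a `robots.txt` `Disallow` rule alone is not enough: it blocks crawling, but a URL that search engines already know about can still appear in results.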

This isn’t the first time chatbots have gone wrong

Last April, we learned that Samsung had suffered a leak of confidential information because of ChatGPT: engineers at the company had used the chatbot in their work. Measures were then taken to avoid such leaks. Samsung was right to act, since a month earlier OpenAI had temporarily taken ChatGPT offline, as reported by Digital Century. A bug gave users access to other users' conversation histories: the content itself was not visible, but the conversation titles were. More recently, in August, a flaw caused ChatGPT to serve a response taken from another user's conversation. ChatGPT's protection of personal data was problematic enough that Italy banned the service from its territory for a month before reauthorizing it.

Our recommendation is not to share any personal information with chatbots, whether Google Bard, ChatGPT, Bing Chat or others. First and foremost because conversation share links, whether on ChatGPT or on Bard, are public: anyone with the URL can view their content.
