
Are supervisory authorities on a witch hunt: how ChatGPT interferes with privacy

Updated: Jun 11, 2023

Read our founder Ksenia Laputko's opinion on what is happening with AI. The number of privacy laws is rising all around the globe, yet they are still failing to protect individuals from being watched and exposed. Research shows that 9 in 10 Americans are worried hackers will access their personal or financial information and use it for nefarious purposes.

ChatGPT has received heavy criticism from supervisory authorities, privacy experts, and users who are concerned about its data protection policies. In recent weeks, data protection supervisory authorities have raised their concerns. The Italian government temporarily banned the technology last month, and authorities in Germany are considering similar action. Spain's AEPD watchdog has also launched an investigation this week into potential data breaches by ChatGPT.
Even the European Data Protection Board (EDPB), an independent body that unites national privacy watchdogs across Europe, has set up a task force on OpenAI's ChatGPT chatbot. The EDPB does not seek to punish OpenAI, but to create more transparent policies.
This article tries to analyse whether the level of anxiety around the risks of AI is justified, or whether it is just a witch hunt.
Generative AI, the prompt-based artificial intelligence that can generate text, images, music and other media in seconds, continues to advance at breakneck speeds.
People put a huge amount of their personal data on the Internet. The authors of this ocean of content have not been asked whether OpenAI can use their data, and their privacy is not protected in any way. Any personal information that has entered ChatGPT's database can be given to anybody in response to a correctly formulated request, and incomparably more accurately than any search engine would provide it.
The simplest example: if you wrote your phone number in a comment on a social network and it got into the model's training data, then when someone asks the chatbot "What is John Doe's phone number?", the AI may give the correct answer.
ChatGPT's privacy policy sheds some light on how the AI gathers its information. There are three sources:
1) Account information that you enter when you sign up or pay for a premium plan.
2) Information that you type into the chatbot itself.
3) Identifying data it pulls from your device or browser, like your IP address and location.

As we can see, most of the data it keeps isn't particularly alarming. In fact, it's pretty standard: you could expect almost any site you have an account with to know these things about you.

One of the risks is that even though the hype around privacy is growing, most people still don't pay much attention to it. For them, the real risk is that AI collects data from individuals conversing with ChatGPT. If a person isn't a "privacy nerd", it is extremely easy to feed it their private information by mistake. Just forget to censor a document you ask the AI to proofread, and you might be in real trouble. The service itself cautions about this: "You use the Service at your own risk. We implement commercially reasonable technical, administrative, and organizational measures to protect Personal Information both online and offline from loss, misuse, and unauthorized access, disclosure, alteration, or destruction. However, no Internet or e-mail transmission is ever fully secure or error free. In particular, e-mail sent to or from us may not be secure. Therefore, you should take special care in deciding what information you send to us via the Service or e-mail. In addition, we are not responsible for circumvention of any privacy settings or security measures contained on the Service, or third party websites."
Let's translate that into plain English. ChatGPT records everything you type into it. Its privacy policy states that when you use ChatGPT, it may collect personal information from your messages, any files you upload, and any feedback you provide.
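One practical mitigation for the "forgot to censor a document" scenario above is to redact obvious identifiers before pasting text into any chatbot. Here is a minimal sketch in Python; the regex patterns and placeholder labels are illustrative assumptions on our part (they catch typical email addresses and phone-like numbers only), not an exhaustive PII detector.

```python
import re

# Illustrative patterns only: a real PII scrubber would cover many more
# identifier types (names, addresses, account numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact John Doe at john.doe@example.com or +1 555-123-4567."))
# Prints: Contact John Doe at [EMAIL] or [PHONE].
```

A helper like this is not a guarantee, but running it over a document before submitting it for proofreading removes the most commonly leaked identifiers.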
Secondly, the AI is still being trained, and it uses personal data available on the internet for training. This is again a risk for individuals. As we know from Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González, it is incredibly hard to take personal data off the internet. Considering data subjects' right to be forgotten, it is very unlikely that the AI would actually forget data it has learned from, even if the original source of the information is erased.

Thirdly, none of us wants our personal data misused. Taking into account the amount of personal data collected by AI, you can imagine that giants such as Meta and Google, or governments, consider it a pure treasure. OpenAI gives very vague information about who it shares your data with and for what reason. It says it may provide your personal information to vendors and service providers to assist in meeting business needs and performing certain functions. These providers include web hosting services, cloud services, other IT providers, event managers, email services, and analytics services.
Fourthly, it is unclear how data subjects can make a request to exercise their rights.

In our opinion, considering all the facts, it is quite right that the watchdogs and supervisory authorities are scrutinising the AI. Even though AI can simplify our lives in many ways, the risks to our privacy are significant. Therefore, it needs clear regulations and far more transparency.


