
POLAND: OpenAI Complaint Alleges Multiple GDPR Violations

A complaint lodged with the Office for Personal Data Protection (Poland's DPA) alleges that OpenAI has violated several provisions of the GDPR, including through improper handling of training data and noncompliance with transparency requirements.



Lukasz Olejnik, a privacy and security researcher working with a Warsaw-based law firm, filed the complaint. He claims that ChatGPT violates the GDPR with respect to lawful basis for data processing, transparency, fairness, data access rights, and personal privacy. The well-known chatbot is available throughout Europe, but it has run into repeated difficulties there, including a temporary ban in Italy over privacy concerns and multiple ongoing investigations.

The roughly seventeen-page complaint claims that OpenAI began violating the GDPR as soon as it started doing business in Europe. It opens with an allegation under Article 36 that the company failed to properly consult EU regulators before interacting with users and collecting their personal data, and that it never carried out a prior risk assessment of ChatGPT to identify problems that would require mitigation measures.

OpenAI says it tries to filter out personal data and does not feed it into its training model. The system nonetheless accepts personal information, and that information sometimes appears to "stick" in ChatGPT's memory and surface in later outputs. This is precisely the situation behind Olejnik's complaint: he says he asked the chatbot to produce a biography of his own life, then requested that OpenAI disclose the source of the personal data it had collected about him, as the GDPR requires.


In a protracted email exchange that ran from late March to June of this year, Olejnik and OpenAI traded information, but Olejnik obtained only a fraction of the data he was entitled to under a GDPR Subject Access Request (SAR). Anything that would have revealed even a little about ChatGPT's internal operations was conspicuously absent from the response.


Olejnik also ran into "hallucinations," the now-familiar chatbot problem of confidently asserting things that are plainly false. He urged OpenAI to fix several inaccuracies in the biography ChatGPT produced. Rather than correcting the information, OpenAI's initial response was to block all ChatGPT requests containing his name; when he pressed further, OpenAI told him such corrections were beyond its ability. Each of these incidents could constitute a GDPR violation, since the regulation protects individuals' rights to access and rectify the personal data that businesses hold about them.



The complaint further argues that OpenAI has no lawful basis under the GDPR for processing personal data, nor has it adequately communicated one to end users. It also contends that users are not given enough information about what personal data is being collected or how it is used, failing the regulation's transparency test.


 

Can ChatGPT comply?


The broad pattern of possible infractions has convinced some observers that OpenAI simply does not care about the GDPR. No company wants to court fines and potential Europe-wide bans, but OpenAI may be watching what companies like Facebook and Google get away with and concluding that charging ahead, and paying some fines along the way, is a viable strategy for a product this popular and potentially revolutionary.


However, the company's circumstances are not as comfortable as those of the big software companies headquartered in Dublin. Because OpenAI has no central EU establishment to call home, it is exposed to regulatory action from any EU member state. Its biggest controversy to date came in Italy, where it was briefly banned before resolving issues around its legal basis for processing data, transparency, and its ability to identify and provide extra protection for minors.


ChatGPT worked its way out of the Italian ban by implementing age verification, providing more information about data processing, and adding a series of opt-out toggles that keep user input out of the training process. However, the Italian DPA is still investigating, as are regulators in France, Spain, and the Netherlands, and all of this is playing out against the backdrop of the European Data Protection Board (EDPB) developing more comprehensive rules for AI-powered products.

The GDPR does not apply in the US, but OpenAI faces a number of civil suits there alleging that it used a wide range of intellectual property as training material without authorization. Most of the criticism has come from book authors who allege that OpenAI never obtained the necessary licenses for their works (or even bought a copy). NPR has also learned that attorneys at the New York Times are considering a lawsuit over the AI's use of the paper's archives.


Back in Poland, Olejnik may have a long wait for a resolution: his lawyer estimates the DPA's investigation could take anywhere from six months to two years to complete.




