Italian regulators say ChatGPT must meet local and GDPR privacy regulations by April 30, but AI experts say the model’s architecture makes such compliance almost impossible.
OpenAI may soon face its biggest regulatory challenge yet, as Italian authorities insist the company has until April 30 to comply with local and European data protection and privacy laws, a task artificial intelligence (AI) experts say could be near impossible.
Italian authorities issued a blanket ban on OpenAI’s GPT products in late March, making Italy the first Western country to shun the products outright. The action came on the heels of a data breach in which ChatGPT and GPT API customers could see data generated by other users.
Per a Bing-powered translation of the Italian order commanding OpenAI to cease its ChatGPT operations in the country until it’s able to demonstrate compliance:
“In its order, the Italian SA highlights that no information is provided to users and data subjects whose data are collected by Open AI; more importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”
The Italian order goes on to state that OpenAI must also implement age verification measures to ensure that its software and services comply with the company’s own terms of service, which require users to be over the age of 13.
To achieve privacy compliance in Italy and throughout the rest of the European Union, OpenAI will have to provide a legal basis for its sweeping data collection practices.
Under the EU’s General Data Protection Regulation (GDPR), tech outfits must obtain user consent to train their products with personal data. Companies operating in Europe must also give Europeans the option to opt out of data collection and sharing.
According to experts, this poses a serious challenge for OpenAI because its models are trained on massive troves of data scraped from the internet and combined into training sets. This form of black-box training aims to produce a phenomenon called “emergence,” in which useful capabilities manifest unpredictably in models.
Unfortunately, this means developers seldom know exactly what is in a given data set. And because the model blends many data points together as it generates outputs, it may be beyond the ability of today’s engineers to extricate or modify individual pieces of data.
Margaret Mitchell, an AI ethics expert, told MIT Technology Review that it will be extremely difficult for OpenAI to identify individuals’ data and pull it out of its models.
To reach compliance, OpenAI will have to demonstrate either that it obtained user consent for the data used to train its models (something the company’s own research papers show is not the case) or that it had a “legitimate interest” in scraping the data in the first place.
Lilian Edwards, an internet law professor at Newcastle University, told MIT Technology Review that the dispute is bigger than the Italian action alone, stating that the violations are so significant the case will likely wind up in the EU’s highest court, the Court of Justice of the European Union.
This puts OpenAI in a potentially precarious position. If it can neither identify and remove individuals’ data upon request nor correct data that misrepresents people, it may find itself unable to operate its ChatGPT products in Italy after the April 30 deadline.
The company’s problems may not stop there, as French, German, Irish and EU regulators are also currently considering action to regulate ChatGPT.
Source: Cointelegraph.com