Italy's data protection watchdog says ChatGPT, the AI-powered chatbot, has breached EU data protection rules.
Italy's Data Protection Authority (DPA) said its investigation found evidence of privacy violations, but it did not specify what those violations were.
The chatbot, launched in 2022, relies on vast amounts of data drawn from the internet.
OpenAI, the company behind ChatGPT, has 30 days to respond. The BBC has approached OpenAI for comment.
Italy has taken a firm stance on data protection where ChatGPT is concerned.
It became the first Western country to block the product, in March 2023, citing privacy concerns.
ChatGPT was reinstated about four weeks later, after OpenAI said it had addressed the issues the DPA had raised.
The regulator then opened a fresh investigation, and in a statement said the evidence it gathered points to breaches of the EU's General Data Protection Regulation (GDPR).
Under the GDPR, companies that breach the rules can be fined up to 4% of their global annual turnover.
The DPA is coordinating with the EU's European Data Protection Board, which set up a dedicated task force on ChatGPT in April 2023.
When ChatGPT returned to Italy in April 2023, the regulator told the BBC it welcomed the changes OpenAI had made but expected further compliance.
A company spokesperson said at the time that it intended to strengthen age verification and inform Italians of their right to opt out of having their personal data used to train its models.
An OpenAI representative said the company would continue to engage with the regulator.
OpenAI is closely tied to Microsoft, which has invested heavily in the company.
Microsoft has integrated AI into its Bing search engine and its Office 365 apps, including Word, Teams, and Outlook.