OpenAI Fined €15 Million by Italian Privacy Watchdog Over ChatGPT Data Practices

Italy’s privacy regulator has imposed a fine of €15 million ($15.5 million) on OpenAI, citing violations in the company’s handling of personal data to train its ChatGPT AI model.

Key Issues Highlighted

The Italian Data Protection Authority announced on Friday that OpenAI processed users’ personal information without establishing a clear legal basis for doing so. Additionally, the company failed to comply with the country’s transparency requirements regarding user data practices.

The regulator also raised concerns about OpenAI’s handling of a data breach in March 2023, which the company reportedly failed to report to the authority. Another issue cited was the lack of effective age verification mechanisms, potentially exposing minors under 13 to inappropriate content generated by ChatGPT.

Investigation and Penalties

This fine follows a probe initiated by the Italian watchdog in March 2023, focusing on OpenAI’s data processing practices. Alongside the financial penalty, the regulator has mandated that OpenAI launch a six-month public awareness campaign. This campaign must be conducted across various media platforms, including radio, television, print, and online outlets, to educate the public about ChatGPT and its data handling procedures.

OpenAI’s Response

A spokesperson for OpenAI expressed the company’s intention to appeal the decision, describing the fine as excessive and detrimental to Italy’s ambitions in artificial intelligence.

“When the Garante ordered us to cease offering ChatGPT in Italy in 2023, we collaborated with them to bring the service back online within a month. They have since acknowledged our leading efforts in privacy protection for AI. However, this penalty amounts to nearly 20 times the revenue we generated in Italy during the relevant timeframe,” the spokesperson said.

Despite the disagreement, OpenAI emphasized its continued commitment to working with regulators and adhering to local privacy laws.

The fine and investigation underscore growing scrutiny of AI companies and their data practices, particularly in Europe, where privacy regulations like the General Data Protection Regulation (GDPR) set stringent standards. OpenAI’s situation serves as a reminder of the potential risks and challenges AI developers face in navigating complex regulatory landscapes while expanding globally.

As OpenAI appeals the decision, the case could set a precedent for how AI companies address privacy concerns and regulatory demands in the future.
