Italy restricts ChatGPT and launches investigation due to privacy worries

OpenAI took ChatGPT offline in Italy after the Italian government's Data Protection Authority temporarily banned the chatbot on Friday and opened an investigation into the artificial intelligence program's suspected violation of privacy rules.

The authority, known as Garante, accused Microsoft-backed (MSFT.O) OpenAI of failing to check the age of ChatGPT users, who are supposed to be 13 or older.

According to Garante, ChatGPT lacks “any legal basis that justifies the massive collection and storage of personal data” for the purpose of “training” the chatbot. OpenAI has 20 days to provide a remedy or face a fine of up to 20 million euros ($21.68 million) or up to 4% of its annual global turnover.

According to OpenAI, ChatGPT has been turned off for users in Italy at the Garante’s request.

A notice on the ChatGPT webpage told users in Italy that the page could not be accessed and that the website’s owner may have put restrictions in place preventing access.

“We actively work to reduce personal data when developing ChatGPT and other AI systems because we want our AI to learn about the world, not specific people,” OpenAI said.

Italy is the first Western nation to take action against an AI-powered chatbot, temporarily restricting ChatGPT’s processing of Italian users’ personal data.

The chatbot is also unavailable in mainland China, Hong Kong, Iran, Russia, and parts of Africa, where residents cannot open OpenAI accounts.

Since it was introduced last year, ChatGPT has sparked a tech craze, inspiring competitors to release comparable products and businesses to incorporate it or related technologies into their applications and goods.

Lawmakers in several nations have taken an interest in the technology’s rapid growth. Many experts say new laws are needed to regulate AI because of its potential impact on national security, employment, and education.

The enforcement of the General Data Protection Regulation is the responsibility of the EU data protection authorities, according to a spokesperson for the European Commission. “We expect all companies active in the EU to respect EU data protection rules,” the spokesperson added.

Margrethe Vestager, executive vice president of the European Commission, tweeted that the Commission, which is debating the EU AI Act, should not be tempted to ban AI outright.

“We must continue to advance our freedoms and safeguard our rights regardless of the technology we use,” she said, adding that this is why the EU regulates the applications of AI rather than the technologies themselves. “Let’s not throw away what has taken decades to build in a few years.”

In an open letter published on Wednesday, Elon Musk called for a six-month pause on the development of systems more powerful than OpenAI’s recently released GPT-4, citing potential risks to society.

Information about how OpenAI develops its AI model is not publicly available.

The real issue is a lack of transparency, according to Johanna Björklund, an AI researcher and associate professor at Umeå University in Sweden. “You should be very transparent about how you conduct AI research,” she said.

A UBS study released last month estimated that ChatGPT reached 100 million monthly active users in January, just two months after its debut, making it the fastest-growing consumer application in history.