Italy has announced a temporary ban on ChatGPT, according to the BBC. The ban came as a surprise to the technology community, and at its heart are the issues of data privacy and the growing popularity of chatbot technology.
ChatGPT is able to answer user questions in natural, human-like language and can mimic other writing styles, drawing on information from the Internet as it stood in 2021 as its database.
Ron Moscona is a partner in the London office of the international law firm Dorsey & Whitney and an expert in technology and data privacy.
Of Italy’s move to ban ChatGPT, Moscona tells Digital Journal: “The ban by the Italian regulators comes as a surprise. Not much detail is available but it appears this was triggered by an announcement by OpenAI a few days ago regarding a data breach incident.”
As well as being unexpected, the measure is also atypical, says Moscona: “It is unusual to completely ban a service because of a data breach incident. Time will tell if other regulators follow the Italian example.”
As to the underlying reasons, Moscona considers: “There is growing nervousness about chatbots which reflects the significant leap forward that the technology represents and the way it very quickly emerged onto the market and became massively popular since late last year.”
The special case presented by chatbots requires additional protections. Moscona notes: “Although in many respects, the risks to privacy or online safety of children that chatbots pose are not different in principle from the risks posed by search engines and social media platforms.”
Furthermore: “The fact that the interaction with the chatbot is so much richer and more detailed than with any other digital technologies available to the general public – and the expectation that the operators of the chatbots will collect a great deal of very rich data very quickly from large numbers of people – raises serious concerns that the potential of this technology to cause harm and threaten personal privacy might actually be a lot greater than the privacy threats posed by standard search engines, content sharing services, social media services and other digital platforms.”
There are other considerations, says Moscona: “Chatbot technology raises a lot of issues and risks that go beyond privacy. Some of those concerns have to do with how users (including criminals and hostile state agencies on the one hand) might be able to use the technology and how users, in particular children and vulnerable people, but also the public in general as well as businesses, governments and other institutions – will rely on the technology in the future.”
Moreover, there is the interface with people. Moscona defines this as: “A big element of this technology is its capability of mimicking humans, making it a perfect tool for fraudsters and for anyone who wants to deceive. There are also concerns as to the magnitude of disruption that the technology could introduce, including by potentially replacing and eliminating jobs…Those concerns and many others will be multiplied if and when AI chatbot services will become available from multiple providers and not just from leading companies like Microsoft (OpenAI) and Google.”
As to how the regulatory framework might develop, Moscona observes: “The regulation of digital technologies (at least in liberal democratic countries) has always lagged years behind the development and spread of the technologies themselves. This is unlikely to change. Only very recently did the European Union and the UK introduce a comprehensive regulatory regime to protect children from harm online – 25 years after the internet became available to every household (in the UK it is yet to become law).”
Different societies have adopted different courses of action. On North America, Moscona says: “The United States has not even come close to introducing a similar comprehensive regulatory framework. Legislatures and regulators take many years to understand the issues arising from digital technologies and to develop the legislation to address them. Unfortunately, it is unlikely that legislatures in the West will be able to come up with a comprehensive set of policies to deal with the challenges raised by chatbots any time soon and it is unlikely that the industry will stop the spread of those tools. The action by the Italian data protection regulator is probably going to remain an unusual example.”
In contrast, other nations will move in more restrictive directions. Moscona points out: “Some authoritarian countries, on the other hand, are likely to move more quickly to restrict the availability of those technologies to the general public which can help enormously in the dissemination of information and ideas and in the education of the public – things that authoritarian regimes prefer to control tightly.”
