Has the time come for increased regulation of the newer forms of artificial intelligence that are emerging? While many agree that some form of regulation is required, there is no consensus as to what shape this should take. The U.K. government, for example, is very much against a statutory body overseeing regulation and wants the technology industry to self-manage its affairs.
Industry commentator Frederik Mennes, Director of Product Management & Business Strategy at OneSpan, says there is a need for strict AI regulation.
Mennes explains to Digital Journal why regulation, together with joined-up policy thinking, is necessary: “The regulation of generative AI is necessary to prevent potential harm stemming from malicious applications, such as hate speech, targeted harassment, and disinformation.”
Mennes notes that: “Although these challenges are not new, generative AI has significantly facilitated and accelerated their execution.” In other words, the advancement in AI is accelerating and exacerbating many of the problems that society faces with the authenticity of content and the ethics of those seeking to deliver it.
In order to build a suitable framework, input is required from the business sector, says Mennes: “Companies should actively oversee the input data used for training generative AI models. Human reviewers, for instance, can eliminate images containing graphic violence.”
Control also plays an important part in ensuring that regulation works. Mennes recommends: “Tech companies should also offer generative AI as an online service, such as an API, to allow for the incorporation of safeguards, such as verifying input data prior to feeding it into the engine or reviewing the output before presenting it to users.”
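The service-layer pattern Mennes describes can be sketched in a few lines. This is an illustrative outline only, not any vendor's actual implementation: the `generate` function stands in for a real model call, and the keyword blocklist is a deliberately naive placeholder for the trained classifiers production systems would use.

```python
# Sketch of the safeguard pattern: a service layer that screens input
# before it reaches the engine and reviews output before it reaches
# the user. All names and the blocklist are hypothetical placeholders.

BLOCKED_TERMS = {"hate speech", "targeted harassment"}  # illustrative only


def violates_policy(text: str) -> bool:
    """Naive keyword screen; real systems use trained classifiers."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def generate(prompt: str) -> str:
    """Stand-in for a call to an actual generative model."""
    return f"model output for: {prompt}"


def safeguarded_generate(prompt: str) -> str:
    # Verify input data prior to feeding it into the engine.
    if violates_policy(prompt):
        return "Request refused: prompt violates the Terms of Service."
    output = generate(prompt)
    # Review the output before presenting it to users.
    if violates_policy(output):
        return "Response withheld: output failed the safety review."
    return output


print(safeguarded_generate("write a product description"))
```

Offering the model only behind such a service boundary, rather than shipping raw model weights, is what makes both checkpoints enforceable.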
In order to assess how effective regulation is and whether it needs adjusting, data is required. Mennes points to data analytics: “Additionally, companies must consistently monitor and control user behaviour. One way to do this is by establishing limitations on user conduct through clear Terms of Service. For instance, OpenAI explicitly states that its tools should not be employed to generate specific categories of images and text.”
Furthermore, Mennes proposes: “Generative AI companies should employ algorithmic tools that identify potential malicious or prohibited usage. Repeat offenders can then be suspended accordingly.”
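The enforcement loop implied here is a simple escalation policy: flag prohibited requests as they arrive and suspend accounts that accumulate violations. The sketch below assumes a three-strikes threshold purely for illustration; the detection step itself is abstracted into a boolean supplied by whatever flagging tool is in use.

```python
# Sketch of a repeat-offender policy: count flagged requests per user
# and suspend after a threshold. The threshold and the upstream
# detection logic are illustrative assumptions, not a known policy.
from collections import Counter

SUSPENSION_THRESHOLD = 3  # assumed policy: three strikes

violations: Counter = Counter()
suspended: set = set()


def record_request(user_id: str, is_prohibited: bool) -> str:
    """Return the action taken for this request: allowed/warned/suspended."""
    if user_id in suspended:
        return "suspended"
    if is_prohibited:
        violations[user_id] += 1
        if violations[user_id] >= SUSPENSION_THRESHOLD:
            suspended.add(user_id)
            return "suspended"
        return "warned"
    return "allowed"
```

In practice the per-user state would live in a datastore rather than process memory, but the escalation logic is the same.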
Mennes is not necessarily sure these regulatory approaches will be sufficient: “While these steps can help manage risks, it is crucial to acknowledge that regulation and technical controls have inherent limitations. Motivated malicious actors are likely to seek ways to circumvent these measures, so upholding the integrity and safety of generative AI will be a constant effort in 2023 and beyond.”
This may mean greater cooperation between governments and a tighter regulatory framework to pull the technology industry into line. It may also be preferable not to have the technology industry involved in regulating its own activities.