Why generative AI requires tighter regulation

Companies should actively oversee the input data used for training generative AI models.

Image: © AFP Arif ALI

Last week, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee to discuss critical risks and considerations for future AI regulation. While Altman and lawmakers collectively agreed that regulation is critical, Altman indicated that the conversation is far from over, as the next steps remain unclear. Regulation here can take different forms and cover different areas.

Looking into the fallout is Frederik Mennes, Director of Product Management & Business Strategy at OneSpan. Mennes has considered the need for further regulation of AI innovations like ChatGPT, especially as it relates to security.

Mennes sets out the case for regulation as: “The regulation of generative AI is necessary to prevent potential harm stemming from malicious applications, such as hate speech, targeted harassment, and disinformation. Although these challenges are not new, generative AI has significantly facilitated and accelerated their execution.”

Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data.

Mennes says that the regulatory framework needs to begin at the early design stage: “Companies should actively oversee the input data used for training generative AI models. Human reviewers, for instance, can eliminate images containing graphic violence.”
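To make the idea concrete, a minimal sketch of such a curation step is below, in Python. The automated pre-filter, the `score_graphic_violence` classifier, and the risk threshold are all illustrative assumptions; Mennes's point is only that flagged material should reach human reviewers before it reaches the training set.

```python
# A minimal sketch of the data-curation step Mennes describes: an
# automated pre-filter scores each candidate training image, and
# anything above a risk threshold is diverted to human reviewers
# instead of entering the training set. `score_graphic_violence` is
# a hypothetical placeholder for a real image-safety classifier.

def score_graphic_violence(image_bytes: bytes) -> float:
    """Hypothetical safety model; returns a risk score in [0, 1]."""
    return 0.0  # placeholder: swap in a real classifier here

def curate_training_images(images: list[bytes], threshold: float = 0.2):
    """Split candidate images into an approved set and a human review queue."""
    approved, review_queue = [], []
    for img in images:
        if score_graphic_violence(img) < threshold:
            approved.append(img)
        else:
            review_queue.append(img)  # human reviewers make the final call
    return approved, review_queue
```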

There also needs to be greater transparency in the process, which Mennes describes as follows: “Tech companies should also offer generative AI as an online service, such as an API, to allow for the incorporation of safeguards, such as verifying input data prior to feeding it into the engine or reviewing the output before presenting it to users.”
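As an illustration of this gatekeeping pattern, the sketch below wraps a hosted model behind input and output checks. It assumes OpenAI's Python SDK (openai>=1.0) and its moderation endpoint; the model name and refusal messages are illustrative choices, not anything Mennes or OpenAI prescribes.

```python
# A minimal sketch of "safeguards around the engine": verify the input
# before it reaches the model, and review the output before it reaches
# the user. Assumes the OpenAI Python SDK (openai>=1.0); model name and
# refusal messages are illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Check text against the hosted moderation classifier."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

def guarded_generate(prompt: str) -> str:
    """Gate both sides of the generation call with moderation checks."""
    if is_flagged(prompt):
        return "Request declined: the prompt violates usage policies."
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = completion.choices[0].message.content
    if is_flagged(answer):
        return "Response withheld: the generated text violates usage policies."
    return answer
```

Serving generation behind an API in this way is what makes such checks possible at all: a model distributed as raw weights cannot enforce either step.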

It will also be important to gather and compile data from the user experience. Mennes sees this as an ongoing duty: “Additionally, companies must consistently monitor and control user behavior. One way to do this is by establishing limitations on user conduct through clear Terms of Service.”

Looking at the leading player, Mennes finds: “For instance, OpenAI explicitly states that its tools should not be employed to generate specific categories of images and text. Furthermore, generative AI companies should employ algorithmic tools that identify potential malicious or prohibited usage. Repeat offenders can then be suspended accordingly.”
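A simple sketch of that enforcement loop follows. The `is_prohibited` classifier, the toy term list, and the three-strike threshold are all assumptions for illustration; real providers use far more sophisticated detection than keyword matching.

```python
# A sketch of the enforcement loop Mennes describes: an algorithmic
# check flags prohibited usage, and repeat offenders are suspended.
# The strike threshold and `is_prohibited` classifier are illustrative
# assumptions, not OpenAI's actual policy engine.

from collections import defaultdict

STRIKE_LIMIT = 3  # assumed threshold for suspension

strikes: dict[str, int] = defaultdict(int)
suspended: set[str] = set()

def is_prohibited(request_text: str) -> bool:
    """Placeholder policy classifier; a toy keyword check stands in here."""
    banned_terms = {"targeted harassment", "disinformation campaign"}
    return any(term in request_text.lower() for term in banned_terms)

def handle_request(user_id: str, request_text: str) -> str:
    """Enforce Terms of Service: block violations, suspend repeat offenders."""
    if user_id in suspended:
        return "account suspended"
    if is_prohibited(request_text):
        strikes[user_id] += 1
        if strikes[user_id] >= STRIKE_LIMIT:
            suspended.add(user_id)
            return "account suspended"
        return "request blocked"
    return "request allowed"
```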

There is more that needs to be done, Mennes notes: “While these steps can help manage risks, it is crucial to acknowledge that regulation and technical controls have inherent limitations.”

However technology evolves, there will always be a need for strong security measures. As Mennes concludes: “Motivated malicious actors are likely to seek ways to circumvent these measures, so upholding the integrity and safety of generative AI will be a constant effort in 2023 and beyond.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
