
Restricting electronic brains: Will attempts to regulate AI security succeed?

We need to level the playing field by homogenising guidance across nation states and limiting a race to the bottom with AI tech.

The software draws on an artificial intelligence dialogue system dubbed 'Buddhabot' - Copyright AFP Behrouz MEHRI

Governments from the UK and US, with endorsement from 18 countries, have developed the first global guidelines for AI cybersecurity. Led by the UK's NCSC and the US's CISA, the guidelines promote a 'secure by design' approach to AI development, drawing on global collaboration and advice from industry experts. This means building in essential security and personal data protections at the outset, rather than trying to bolt them on once a system has been developed.

Josh Davies, Principal Technical Manager at cybersecurity company Fortra, thinks that such measures are necessary. However, challenges and obstacles likely lie ahead.

Davies observes: “The AI arms race and rapid adoption of open AI systems have created concerns in the cybersecurity sector around the impact of a supply chain compromise – where the AI source code is compromised and used as a trusted delivery mechanism to pass on the compromise to third-party users.”

Emerging policy frameworks offer a way to tackle this, as Davies notes: “These guidelines look to secure the design, development, and deployment of AI, which will help reduce the likelihood of this type of attack.”
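To illustrate the kind of supply-chain control this guidance points towards, consider a minimal sketch (not taken from the guidelines themselves) that verifies a downloaded model artifact against a pinned SHA-256 digest before it is loaded; the file name and digest here are hypothetical placeholders.

    import hashlib
    from pathlib import Path

    # Hypothetical pinned digest, published out-of-band by the model supplier.
    EXPECTED_SHA256 = "0f343b0931126a20f133d67c2b018a3b5e3a0e3f0b5c7a9d8e1f2a3b4c5d6e7f"

    def verify_artifact(path: Path, expected_sha256: str) -> bool:
        """Return True only if the file's SHA-256 digest matches the pinned value."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            # Stream in chunks so large model files need not fit in memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256

    artifact = Path("model.bin")  # hypothetical downloaded model artifact
    if not verify_artifact(artifact, EXPECTED_SHA256):
        raise RuntimeError("Model artifact failed integrity check; refusing to load.")

In practice, a production pipeline would layer cryptographic signatures on top of a bare checksum, but the posture is the same one the guidelines describe: establish trust in the artifact before it enters the system, rather than after.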

Attempting regulation on anything other than an international scale is likely to falter. In this context, Davies states: “As systems and nation states are increasingly interdependent, global buy-in is crucial. We have already seen how collective security is important; otherwise threats are allowed to grow, become more sophisticated, and attack global targets. Ransomware criminal families are a prime example.”

The consequence of this is to “level the playing field by homogenising guidance across nation states and limiting a race to the bottom with AI tech.”

Discussing the guidance in further detail, Davies explains: “The guidelines recommend the use of red teaming. Red teaming surfaces the gaps in systems and security strategies, and ties them directly to an impact. The AI Executive Order also mandates red teaming to identify flaws and vulnerabilities in AI systems. Mandating red teaming future-proofs these guidelines (and other regulations), as it is hard to anticipate the threats of tomorrow and the appropriate mitigations – especially at the pace governments can legislate.”

In interpreting this, Davies finds: “It’s an indirect way of saying you need to make sure that your security strategies are always up to date, because if not, attackers will surely find and expose your gaps. This is important as we have seen other security regulations quickly become outdated and redundant as controls cannot be agreed upon and updated at the pace required to achieve good security.”

This leads to a series of key questions that Davies poses:

  • Will we see adoption?
  • Or does it just serve to reassure the public that AI issues are being considered?
  • What is the consequence of not following the guidance?

In considering these, Davies answers: “I would hope to see soft enforcement through the exclusion of organisations that cannot show adherence to guidance from government or B2B collaborations.”

How will these regulations be enforced? Davies ponders: “Without any punitive measures, a cynic would say organizations have no motivation to implement the recommendations properly. An optimist might lean on the red team reports and hope for buy-in on reporting flaws and issues, removing the ‘black box’ nature of AI which some executives have hidden behind, and opening up these leaders to the court of public opinion if there is evidence they were aware of a flaw and did not take appropriate action, resulting in a compromise and/or data breach.”

Davies sees the guidance as establishing the minimum set of requirements: “These guidelines are a step in the right direction. They pull together key AI stakeholders, from nation states and industry, and call for collaboration and consideration of the security of AI. Hopefully this is a continued theme, as we’ve seen with the United States AI executive order, and that AI systems are developed responsibly, without stifling innovation and adoption.”

In terms of likely success, Davies considers: “My personal opinion is that the real value we might see from such collaboration will be when we do see a large-scale AI compromise. Hopefully the involved parties are brave enough to lift the lid on what happened so everyone can learn how to be better prepared, and we can define further guidance (preferably as a requirement) beyond just secure build practices and a general monitoring requirement. But this is a good start. Is it groundbreaking? In my opinion, no. Security teams should already be looking to apply the principles outlined to any technological development. This has taken long-standing DevSecOps principles and applied them to AI. I would expect it will have the most impact on startups entering the space, i.e. those without an existing level of security maturity.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
