How is artificial intelligence viewed by the U.S. public? According to Swimlane’s new report, Regulation vs. Reality: Are the Fed’s Attempts at Wrangling Incident Disclosure Effective?, in the section titled ‘Consensus on AI Regulation’, a substantial majority (83 percent) of respondents believe there should be regulations on the development and use of AI.
When asked about the biggest challenges they currently face in adopting or expanding the use of AI within their organizations, most respondents (58 percent) cited balancing the need for data collection and analysis with maintaining adherence to data privacy regulations and user trust.
These findings make developments in the regulatory space even more interesting.
Earlier this year, the Colorado state legislature passed the U.S.’s first comprehensive artificial intelligence protection law. This legal framework is aimed at curbing the discriminatory effects of ‘high-risk’ AI.
Considering how this piece of legislation is being implemented, for Digital Journal, is Kevin Kirkwood, CISO at Exabeam.
Considering the regulation, Kirkwood states: “The law is an interesting step in mostly the right direction. By virtue of the fact that humans are creating the code and building the pools that the code uses to ‘learn’ from, it will almost certainly carry the bias of the person or persons creating the code.”
On the scope of the bill, Kirkwood notes: “The ‘high risk’ that the bill calls out is when the AI is directly and/or indirectly used in making a decision by the code or a human that will result in a discriminatory act.”
As for the consequences, Kirkwood spells out: “The individual in this case may be tasked with conducting a personal risk assessment based on information that is presented on the websites of the developer of the AI code. This will be difficult for individuals who may not be equipped with the ability to assess the overall risk of bias and how the developer intended for the AI to act.”
The responsibilities of those who create AI do not end here. Kirkwood points out: “The developer also holds the responsibility to assess the bias risk and continually test the product for potential bias. This continual testing will be based on the materiality of the change. This will create a need to ensure the product website is updated in a timely fashion upon the completion of a material change. The developer will be in a better position to assess the risk and inform the users of that risk.”
As to the essential thrust of the law within the state, Kirkwood observes: “Companies should not be allowing AI to make decisions but should use this new technology to augment a user to make that final decision. While this carries the same flaw of introducing the human, it makes the individual culpable for the action along with the company, and there are already laws for this.”
Summing up the future landscape, Kirkwood predicts: “The biggest challenge to the law will be to quantify bias and fully test for that problem.”
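To make that challenge concrete, one widely used way to quantify bias is to compare the rate of favorable decisions across demographic groups. The following is a minimal sketch, assuming a binary decision and a single protected attribute; the data, the group labels, and the 0.8 threshold (the informal ‘four-fifths rule’ from U.S. employment guidance) are illustrative assumptions, not anything specified by the Colorado law or by Kirkwood:

```python
# A minimal sketch of quantifying bias via selection rates per group.
# All data here is hypothetical, for illustration only.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Favorable-decision rate per group.

    decisions: list of 0/1 outcomes (1 = favorable, e.g. application approved)
    groups:    list of group labels aligned with decisions
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        favorable[g] += d
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    The informal 'four-fifths rule' treats a ratio below 0.8 as a
    signal of potential adverse impact worth investigating.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: model decisions and applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
ratio = disparate_impact_ratio(rates)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flagged for review.")
```

In practice, a developer would run checks of this kind against production decisions after each material change to the model or its training data, which is the sort of continual testing Kirkwood describes above.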
