An interesting question about artificial intelligence was raised by Lenore Kerrigan, country sales director for the enterprise information management group OpenText, in an article she wrote for Business Tech.
She points out that AI is here to stay and will only grow more capable and more powerful over time. AI, along with autonomous devices, is transforming the way we bank, run our businesses, and operate across other sectors, altering basic operations and decision-making within organizations while improving efficiency and response times.
Kerrigan cites the Davos 2018 World Economic Forum meeting, held earlier this year, where world leaders debated the ethics of AI and UK Prime Minister Theresa May launched the UK’s Centre for Data Ethics and Innovation.
The primary objective of this advisory body is to work with international partners in reaching common ground on understanding “how to ensure the safe, ethical and innovative deployment of AI.”
Who will guard the guardsmen?
The debate over the safe and ethical deployment of AI may not be new, says Kerrigan. She points to a 2,000-year-old question posed by the Roman poet and satirist Juvenal: “Who guards the guardsmen themselves?” The phrase has since come to cover any form of power or influence that goes unchecked.
While the reference might apply to anyone in the business or political world today, concerns were raised at Davos 2018 over the challenges posed by AI and other autonomous devices interconnected throughout our world. In essence: who will oversee our increasingly smart computers as they make complex decisions that affect our lives?
What will happen when we become totally dependent on artificial intelligence? What if we create an AI that thinks, and even knows better about something, than its human masters? This brings to mind the true incident in 1983, when a glitch in a Soviet early-warning computer nearly started a nuclear war.
Kerrigan points out that we are still a long way from turning every part of our lives over to AI, and it hasn’t encroached too far into our personal lives, yet.
The volume of data and information created is huge
AI has proven helpful in analyzing very large data sets to find patterns, whether in healthcare, rapid trading for financial institutions, or media buying for brands. And the volume of data created, quite often by the machines themselves, continues to grow.
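To make the pattern-finding Kerrigan describes a little more concrete, here is a minimal sketch, assuming Python with NumPy and scikit-learn and an entirely synthetic stand-in for a transaction data set, of the kind of clustering an analytics system might run to surface an unusual group of records for a human analyst to review. It is an illustration only, not a description of any particular vendor’s product.

```python
# Minimal sketch: clustering a (synthetic) data set to surface a pattern.
# Assumes NumPy and scikit-learn are installed; the data is purely illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)

# Stand-in for a large real-world data set, e.g. anonymized transaction
# features such as (amount, frequency). A real system would load millions of rows.
typical = rng.normal(loc=[50.0, 5.0], scale=[10.0, 1.0], size=(10_000, 2))
unusual = rng.normal(loc=[500.0, 40.0], scale=[50.0, 5.0], size=(100, 2))
data = np.vstack([typical, unusual])

# Group the observations into two clusters; the small cluster is the
# "pattern" an analyst might then investigate further.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("cluster sizes:", np.bincount(model.labels_))
print("cluster centers:\n", model.cluster_centers_)
```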
This means we will have to rely on AI even more. Kerrigan points out that the trend toward streaming video and a future built on 4G and 5G represents only a small change in volume compared to the data created by AI and its connected “edge” devices. She writes: “The bigger challenge will be the introduction of millions, then billions and eventually trillions of connected ‘edge’ devices that will virtually map the physical world in every detail and in real time.”
So how far will we allow AI to go in determining what is right for us in any given situation? Perhaps more important, how far will we allow it to go in handling our weaponry and national security? And what if AI algorithms fall into the wrong hands? There are still so many unanswered questions.
Artificial intelligence will make our lives better, and we are seeing the results today in everything from better weather forecasting to the many amazing breakthroughs in the medical field. But as Kerrigan writes, “How we keep AI in check is partly determined by how we choose to use it in the first place.”
