Op-Ed: ‘Ethical AI’ matters — the problem lies in defining it

Microsoft is to invest around $1 billion in the OpenAI project, a group whose backers have included Elon Musk and Amazon. The partners are seeking to establish “shared principles on ethics and trust”. The project is considering two streams: cognitive science, which is linked to psychology and considers the similarities between artificial intelligence and human intelligence; and machine intelligence, which is less concerned with how similar machines are to humans and instead focuses on how systems behave in an intelligent way.

With the growth of smart technology comes an increased need for humanity to place trust in algorithms that continue to evolve. Increasingly, people are asking whether an ethical framework is needed in response. It would appear so, with some machines now carrying out specific tasks more effectively than humans can. This leads to two questions: ‘what is ethical AI?’ and ‘who should develop and regulate its ethics?’

AI’s ethical dilemmas

We’re already seeing examples of what can go wrong when artificial intelligence is granted too much autonomy. Amazon had to pull an artificial intelligence-operated recruiting tool after it was found to be biased against female applicants. A different form of bias was associated with a machine learning-run recidivism assessment tool that was biased against black defendants. The U.S. Department of Housing and Urban Development recently sued Facebook over its advertising algorithms, which allow advertisers to discriminate based on characteristics such as gender and race. Citing similar ethical concerns, Google opted not to renew its artificial intelligence contract with the U.S. Department of Defense.

These examples outline why, at the early stages, AI produces ethical dilemmas and perhaps why some level of control is required.

Designing AI ethics

Ethics is an important design consideration as artificial intelligence technology progresses. This philosophical inquiry extends to how humanity wants AI to make decisions, and to which types of decisions it should make. This is especially important where there is potential danger (as with many autonomous driving scenarios); and it extends to a more dystopian future where AI could replace human decision-making at work and at home. In between, one notable experiment detailed what happened when an artificially intelligent chatbot became virulently racist, a study intended to highlight the challenges humanity might face if machines ever become super-intelligent.

While there is agreement that AI needs an ethical framework, what should this framework contain? There appears to be little consensus over the definition of ethical and trustworthy AI. A starting point is the European Union document titled “Ethics Guidelines for Trustworthy AI”. Within this brief, the key criteria are for AI to be democratic, to contribute to an equitable society, to support human agency, to foster fundamental rights, and to ensure that human oversight remains in place.

These are important concerns for a liberal democracy. But how do these principles stack up against threats to human autonomy, as with AI that interacts with and seeks to influence behavior, as in the Facebook-Cambridge Analytica issue? Even with Google search, the output, which is controlled by an algorithm, can have a significant influence on the behavior of users.

Furthermore, should AI be used as a weapon? If robots become sophisticated enough (and it can be proven that they can ‘reason’), should they be given rights akin to a human’s? The question of ethics runs very deep.

OpenAI’s aims

Grappling with some of these issues is what led to the formation of OpenAI. According to Smart2Zero, OpenAI’s primary goal is to ensure that artificial intelligence can be deployed in a way that is both safe and secure, so that the economic benefits can be widely distributed through society. Notably, this does not capture all of the European Union goals, such as how democratic principles will be protected or how human autonomy will be kept central to any AI application.

As a consequence of Microsoft joining the consortium, OpenAI will seek to develop advanced AI models built on Microsoft’s Azure cloud computing platform. There are few specific details of how the project will progress.

Commenting on Microsoft’s big investment and commitment to the project, Microsoft chief executive Satya Nadella does not shed much light: “AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges…our ambition is to democratize AI.”

Do we need regulation?

It is probable that the OpenAI project will place business first, and it will no doubt seek to reduce areas of bias; this in itself is key to the goals of the partners involved. For wider ethical issues, it will fall to governments and academia to develop strong frameworks, for these to gain public acceptance, and then for an appropriate regulatory structure to be put in place.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
