AI ethics center launched to assess machine bias

By Tim Sandle     Dec 16, 2018 in Technology
The advance of artificial intelligence and machine learning into ever greater areas of life requires a review of the ethical and human-facing implications, according to the University of Guelph. A new hub has been launched to address these issues.
The University of Guelph (Ontario, Canada) has launched an artificial intelligence ethics center to bring academics, industry and policy makers together. The hub has been created amid the expanding debate around data privacy and charges of bias in some forms of artificial intelligence. The center will also consider how humans and machines interact and what this means for the human psyche.
As reported by The Star, the Centre for Advancing Responsible and Ethical Artificial Intelligence (CARE-AI) aims to bring experts together to study and teach humanist approaches to artificial intelligence. Its scope extends to everything from medical imaging to automated credit card approvals.
Speaking about the launch, Professor Graham Taylor said: "AI has the potential to do harm and the potential to improve life... We want to connect researchers trying to solve real problems that are important to people."
As technology advances, there is a need to discuss and debate the ethical issues raised by rapidly developing technologies. In addition, there is a need to set out best practices around data use (such as accountability and permitted data uses) and to identify where new regulations may be required. Regulations could, for example, be required in cases of bias. Algorithms make use of data about past behavior, which means biases embedded in the data can be reinforced and strengthened over time.
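As a loose illustration of that feedback loop, the sketch below is a hypothetical toy simulation (not drawn from the article or from CARE-AI): a credit-approval "model" that imitates past decisions keeps reproducing a historical disparity between two groups even though both groups repay at the same rate. All group names, numbers and the naive scoring rule are assumptions made purely for illustration.

```python
# Hypothetical sketch (not from the article or CARE-AI): a toy simulation of
# how a system trained on records of past decisions can lock in and reinforce
# the bias already present in that history. All names and numbers are
# illustrative assumptions.
import random

random.seed(1)

TRUE_REPAY_RATE = 0.8  # identical for both groups

# Historical records: both groups repay at the same rate, but group "B"
# was approved far less often in the past.
history = {
    "A": {"seen": 1000, "approved": 600, "repaid": 480},
    "B": {"seen": 1000, "approved": 150, "repaid": 120},
}

def model_score(group):
    """Naive 'model': approves in proportion to the group's historical
    approval rate, i.e. it imitates past decisions rather than repayment."""
    h = history[group]
    return h["approved"] / h["seen"]

for year in range(1, 6):
    for group in ("A", "B"):
        applicants = 1000
        p = model_score(group)
        approved = sum(random.random() < p for _ in range(applicants))
        repaid = sum(random.random() < TRUE_REPAY_RATE for _ in range(approved))
        # Each round's (still-biased) decisions are appended to the training
        # data, so the next model sees ever more evidence for the same
        # disparity: the bias is self-reinforcing.
        history[group]["seen"] += applicants
        history[group]["approved"] += approved
        history[group]["repaid"] += repaid
    summary = {
        g: (round(model_score(g), 2),
            round(history[g]["repaid"] / history[g]["approved"], 2))
        for g in history
    }
    print(f"year {year}: (approval rate, repayment rate) {summary}")
```

Under these assumptions the approval gap never closes, because the model never observes how rejected applicants would actually have behaved; each round of biased decisions simply adds more training data that confirms the old pattern.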
On the issue of 'human values': without considered programming, artificial intelligence systems have no default values, which means that ethical design and application need to be central to the way such intelligence is created. This includes ensuring that robots are not designed primarily to kill or harm humans, although this point is challenging where advanced technologies are built for military purposes.
In related news, the U.S. security community has concluded that both quantum computing and artificial intelligence are 'emerging threats' which need to be considered at the same level as conventional terror attacks.