Led by Hugo Larochelle and Matthew Jackson of the Canadian Institute For Advanced Research, an international team of scientists and technologists is making the case for the creation of a new scientific discipline termed machine behaviour. This, they argue, is necessary to advance our understanding of how artificial intelligence will affect society, culture, the economy and politics.
The researchers set out their case in the science journal Nature, in a paper called “Machine behaviour.” In the paper they note how machines based on artificial intelligence are starting to influence social, cultural, economic and political interactions. Consequently, understanding the behaviour of artificial intelligence systems is necessary if humanity is to control the actions of machines, to maximize the benefits that artificial intelligence presents, and to minimize any harms.
Doing this requires a new academic discourse, the researchers contend, and one that goes beyond the confines of computer science to embrace perspectives from all of the sciences. As they write: “Machine behavior similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate.”
Although the people who program AI bear considerable responsibility for how it works, the researchers also call for studying how machines themselves operate within a “larger socio-technical fabric.” While machines do not (yet) have agency in the sense that people or animals do, machine behaviours still vary with altered environmental inputs. Sometimes machines display forms of intelligence and behaviour that are qualitatively different from those of people or animals, and this demands a new level of understanding and interpretation — one that, the researchers argue, requires cross-disciplinary effort.
Among these key questions, Hugo Larochelle highlights how humans interact with bots, in an interview posted on the CIFAR website. He raises the issue of how we feel about conversations that involve machines: “Do conversations with machines feel different, or similar? What are the implications of that? I think there are some really interesting questions there that we haven’t quite explored yet.”
Matthew Jackson discusses the need for social analysis when artificial intelligence is developed. He poses the following dilemmas in the same CIFAR feature: “Questions about who to prioritize when you program a self-driving car. Every time a company writes down an algorithm to change your newsfeed, or to make new suggestions of who you should be friends with, it’s taking a moral and ethical stand.”
It is for these reasons that the Nature paper makes the case for a new scientific discipline to study the broad effects of AI.