Q&A: Minerva University incubates ethical AI for the greater good of the world

When it comes to the future of AI, the possibilities are limitless. There are a few big forces that are reasonably predictable in the next five to ten years.

Gulf countries are investing heavily in artificial intelligence as they seek to move away from their reliance on fossil fuels - Copyright AFP Jessica YANG

Artificial intelligence (AI) is set to remain an important feature of larger businesses, and the technology is improving daily. There are potential ethical risks involved; however, in the right hands, it has the potential to do powerful things for the greater good.

The Masason Foundation/Minerva University AI Research Lab is a collaborative AI research and start-up incubation program for undergraduate students. Students are supported by local business partners, AI mentors, and Minerva faculty.

Projects and start-up concepts emerging from the lab have included ideas to address challenges around climate, nutrition, and women’s reproductive health. To discover more, Digital Journal caught up with Mike Magee, President of Minerva University, and Patrick Watson, Ph.D., Associate Professor of Computational Sciences at Minerva University.

Mike Magee is the President of Minerva University. Prior to joining Minerva, he was the founding CEO of Chiefs for Change, a non-profit organization supporting leaders of many of the nation’s largest and most innovative K-12 public education systems. Previously, Magee co-founded and was CEO of Rhode Island Mayoral Academies (RIMA). As CEO of RIMA, he built a statewide network of regional, racially and economically diverse public schools while successfully advocating for sweeping changes to state education policy.

Professor Watson is the co-director of the AI Lab. Professor Watson’s job includes coordinating the technical side of the lab, helping students learn and build machine-learning-based software, putting them in touch with folks in research and industry, and providing project management during the summer for their specific projects. Before becoming a professor, Watson worked at IBM Research, where he focused on AI for education.

Digital Journal: What are the aims of the Masason Foundation/Minerva University AI Research Lab? 

Mike Magee: At Minerva University, we take pride in preparing the next generation of leaders and world changers who are determined to have a positive impact on their industry of choice. As a part of Minerva University, the Masason Foundation/Minerva University AI Research Lab has an ethical orientation to the world. 

The Lab is a collaborative AI research and start-up incubation program for a select group of fellows to learn about and create AI-based technologies. The program includes an academic year of independent research and a summer internship at the AI-focused venture capital firm DEEPCORE Inc. in Tokyo, where students get first-hand experience of what it’s like to develop and launch new AI-based products.

Through the program, students are encouraged to consider how AI might be brought to bear on solving the world’s most intractable challenges. Since its launch in 2019, projects and start-up concepts emerging from the lab have included ideas to address challenges around climate, nutrition, and women’s reproductive health. The lab differs from other university startup incubators in that students, mentors, and faculty can be anywhere geographically and still collaborate at a high level.

DJ: How advanced is AI today?

Patrick Watson: Since the 1930s we’ve been writing new instructions and building faster computers. The first major wave of AI took place in the 1970s, when good programmable computing hardware was widely available. The second major wave of AI technologies, happening right now, is mostly driven by the widespread availability of digitized text and images. Most of the machine learning algorithms being used now were created in the 1970s and 1980s, but at the time there weren’t enough data or fast enough computers to iterate on the algorithms or to make them do anything interesting.

Now that we have those better technologies, it’s possible to remix different algorithms and models to do interesting things. On that note, the most advanced thing about AI today is how accessible and available it is. There are a ton of well-supported, well-documented, open-source machine learning libraries that are readily available and free. If you’re willing to learn statistics, it’s very, very easy to work with the technology.
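As a rough illustration of that accessibility, a few lines with an open-source library such as scikit-learn (our example here, not one named in the interview) are enough to train and evaluate a simple model:

```python
# Minimal sketch: load a built-in dataset, fit a classifier, report held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=500)  # a plain, off-the-shelf classifier
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```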

DJ: What are the future possibilities for AI? 

Watson: When it comes to the future of AI, the possibilities are limitless. There are a few big forces that are reasonably predictable in the next five to ten years. The first is the accessibility I mentioned before. I believe more and more people will work directly with AI technologies in the next decade. I also believe that virtually every device or piece of software will have a machine-learning component, and it will be virtually impossible to function or hold down a job that doesn’t involve AI. That said, those technologies will not look very spectacular. A good example would be someone in construction using a drone to run wiring. Technically, there’s an AI component there: the drone has some pretty clever software to fly itself, but for the end user that’s mostly invisible.

When people imagine “AI” they usually imagine something that’s based on human intelligence, and for that matter, on a really narrow conception of human intelligence that leaves out things like “keeping your balance” or “reconstructing past experiences based on minimal details.” All of those core functionalities need to be worked out and implemented in a variety of systems, but usually, these innovations aren’t marked as meaningful progress because they don’t fit into the public’s prior conception of what AI is. The second major AI push, which will happen quasi-simultaneously, will likely depend on research into novel sensors to collect data types that we currently have very limited access to. This will look more “human-like” because it will be implementing things like physical strain and body movement, perception of textures and chemical compounds, etc.

DJ: Why do some people ‘fear’ AI?

Watson: Because fear is a popular mode for selling media products, especially fear of the unknown. Very few of the fears about AI come from anyone who has any meaningful contact with the discipline. It’s not that there aren’t reasonable concerns around AI technologies, mostly related to how our existing institutions might interact with them, but sober policy conversations about how to make, say, the passport system more robust to digitally generated credentials don’t sell many papers. That’s one of the major goals of a program like our AI Lab: to give students practical experience so they can better understand the nature of the claims that are being made in the discipline and whether they should be taken seriously.

DJ: What are the ethical concerns with AI today?

Watson: The two big ethical concerns that get the most public discussion are AI trust and interpretability, and AI safety.

AI trust and interpretability refers to how easy it is to understand what AI systems are doing and why. Because these systems rely on historical patterns of data and complex algorithms, they tend to make somewhat arcane decisions that are difficult to explain in language. I don’t think the systems are particularly hard to interpret from a mathematical or statistical standpoint. The engineer who built the system can usually tell you exactly what it’s doing, but they’ll have to use equations to do so. This is a problem for people in positions that require articulacy and personal interaction. Those people are pushing the narrative that AI needs to be interpretable in human language so that they can keep their jobs. This happens both within tech companies, and when tech companies work with external clients who don’t trust the algorithms to make decisions for them. So I think the shift here is mostly going to be one driven by adjustments in the education of humans rather than changes to AI. 
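To make the “equations” point concrete, here is a small illustrative sketch (again assuming scikit-learn as the example library, which the interview does not name): the engineer can print the exact scoring equation a linear model has learned, even though it reads as weighted terms rather than plain language:

```python
# Fit a small linear model and print the literal equation it uses to score cases.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(data.data, data.target)

clf = pipe.named_steps["logisticregression"]
# Sort features by the size of their learned weight and show the top few terms.
terms = sorted(zip(data.feature_names, clf.coef_[0]), key=lambda t: abs(t[1]), reverse=True)
equation = " ".join(f"{w:+.2f}*{name}" for name, w in terms[:5])
print(f"score(x) = sigmoid({equation} ... {clf.intercept_[0]:+.2f})")
```

The explanation is exact, but it reads as arithmetic over dozens of scaled measurements, which is Watson's point: the gap is one of language, not of mathematical opacity.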

AI safety is the broad category of techniques that try to prevent robots from killing all humans like they do in sci-fi movies. This often involves the development of algorithms, like inverse reinforcement learning, that have some mathematical guarantees that make AI produce behaviors more in line with what humans actually do (this is sometimes called the “value alignment” problem). It’s much rarer to see practical defenses, like developing more modular AIs or funding actual groups of scientists who could fight an outbreak of computational contagion the way the CDC fights disease, because AI safety is largely funded by defense department agencies, which are mostly interested in furthering technological progress and in developing AI weapons. These agencies providing the funding have an influence on what projects are selected. Again, AI safety is a scary idea, and scary ideas get a lot of attention and funding because they spread more readily through popular media.
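As a loose illustration of the value-alignment idea, the sketch below infers a simple reward function from synthetic pairwise human preferences (a Bradley-Terry-style reward model). It is a simplified stand-in rather than the inverse reinforcement learning methods Watson refers to, and every number in it is made up for illustration:

```python
# Toy value-alignment sketch: recover reward weights from pairwise human preferences.
# Everything here is synthetic; the "human values" are a vector we invent and then
# try to recover from preference labels alone.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0])  # hidden "human values" over 4 outcome features

# Pairs of candidate outcomes; the human prefers whichever has higher hidden reward.
A = rng.normal(size=(200, 4))
B = rng.normal(size=(200, 4))
prefer_a = (A @ true_w > B @ true_w).astype(float)

# Bradley-Terry style model: P(a preferred over b) = sigmoid(r(a) - r(b)), r(x) = w.x
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(A - B) @ w))
    grad = (A - B).T @ (prefer_a - p) / len(A)  # gradient of the log-likelihood
    w += 0.1 * grad

print("recovered direction:", np.round(w / np.linalg.norm(w), 2))
print("true direction:     ", np.round(true_w / np.linalg.norm(true_w), 2))
```

Real value-alignment work deals with sequential behavior and far messier human signals, but the core move, inferring what people value from what they choose, is the same.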

DJ: How can bias be minimized with AI development?

Watson: From a statistical perspective, any success a machine learning algorithm has is because of bias. But “bias” in public consciousness usually means “under- or over-representation of historically disadvantaged groups” rather than “statistical modeling power.” Algorithmic bias is the concern that the algorithms are limited by the perspectives of the technologists making them, both because certain groups are underrepresented in the pool of technologists and because of limits on those technologists’ knowledge. I think it’s very important to point out the power dynamics at play here.

AI can discriminate in exactly the same way as its creators, not because the creators put in a discrimination algorithm, but because the past included discrimination. Data bias is the concern that the data we train the models on isn’t broadly representative of diverse perspectives and populations. Data will never be broadly representative of diverse perspectives and populations: data is static, and perspectives and populations are ever-evolving. We can never collect “enough” data to represent the dynamic, changing, and unstable world we live in. Reducing data bias isn’t a useful thing for existing power structures. That’s why it’s a popular AI ethics issue.
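Watson’s point that a system can discriminate simply “because the past included discrimination” is easy to see in a toy simulation. The sketch below (with entirely made-up data and a hypothetical approval scenario) trains an ordinary classifier on historically skewed decisions; the model reproduces the skew even though no discrimination rule was ever programmed in:

```python
# Toy illustration of data bias: a model trained on skewed historical decisions
# reproduces the skew. The scenario and all numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)   # two hypothetical demographic groups
skill = rng.normal(size=n)           # identically distributed in both groups

# Historical decisions: the same skill threshold, but group 1 was approved
# only half as often as group 0.
past_approved = ((skill > 0.0) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression(max_iter=1000).fit(X, past_approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval {past_approved[group == g].mean():.2f}, "
          f"model approval {pred[group == g].mean():.2f}")
```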

DJ: Within which areas can AI be harnessed for the social good?

Watson: As is probably evident from the previous four answers: one of the most useful ways AI can further social good is that it’s a form of intelligence that’s relatively unresponsive to social opprobrium. The bots generally see patterns in data pretty clearly. If they’re behaving outrageously, that’s a reflection of the environment they’re created in. This is usually a positive thing because it prevents their creators from falling back on social capital. It makes it hard for someone to simultaneously have the technical power that derives from these new AI technologies, and power derived from traditional networks of wealth and social connections. Spreading out power is generally a good thing, so I’m looking forward to a world where technical skills are a source of influence beyond those traditionally embedded in social networks. The big issue, of course, is access and education. If technical skills and AI technologies are only obtainable by the already influential, I think such technologies will just be captured by the powerful.

This is why I teach AI at Minerva University and why the AI Lab program is so important. Together they give us an opportunity to spread important knowledge around to people from all sorts of backgrounds all over the world. I remember a couple of years ago I sent a letter of recommendation from a post office in New York to the Polish Embassy in Tokyo to recommend a student for graduate school in Seoul. She was highly qualified, but how on earth would that graduate program know about her without Minerva? I really think that our University serves an important purpose in the world of connecting smart, technical folks to the systems they wouldn’t otherwise have access to.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
