AI is pushing technological innovation to new heights, from climate change initiatives to day-to-day productivity at work. Yet these advances come with risks, and computer scientist and Stanford professor Dr. Fei-Fei Li says they can be mitigated by taking a human-centered approach.
She chatted with McKinsey Global Publishing’s Mike Borruso about human considerations and company best practices from her new book, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI.
Here are some highlights from the interview:
Tomorrow’s AI must prioritize “human dignity” and “well-being”
“We also recognize that the most important use of a tool as powerful as AI is to augment humanity, not to replace it. When we think about this technology, we need to put human dignity, human well-being — human jobs — in the center of consideration. That’s the second part of human-centered AI.”
“[Artificial] intelligence is nuanced; it is complex. We are all excited by a large language model and its power. But we should recognize human intelligence is very, very complex. It’s emotional, it’s compassionate, it’s intentional, it has its own blind spots, it’s social. When we develop tomorrow’s AI, we should be inspired by this level of nuance instead of only recognizing the narrowness of intelligence. That’s what I see as human-centered AI.”
AI’s “catastrophic risks” warrant guardrails
While noting that she isn’t coming from a place of “doom and gloom,” Li says the need for guardrails is urgent, as some of the risks are “catastrophic.”
“For example, disinformation’s impact on democracy, jobs and workforces, biases, privacy infringement, weaponization: these are all very urgent. A more urgent issue that many don’t see from a risk point of view — but I do — is the lack of public investment. We have an extreme imbalance of resources in the private sector versus the public sector. That is going to bring harm.”
“I think putting guardrails and governance around the powerful technology is necessary and inevitable. Some of this will come in the form of education. We need to educate the public, policymakers, and decision-makers about the power, the limitations, the hype, and the facts of this technology.”
Whether you’re working in health care, finance, insurance, agriculture, or any other industry, AI is making an impact. So what can your company take away from this interview for its own AI journey?
- Educate your employees on AI technology, its economic impact, and its ethical considerations.
- Hire for AI skills over credentials.
- Establish AI guardrails and norms that reflect local laws and your company values.
Read or listen to the whole interview here.
