Elon Musk and other prominent technology leaders have called for a pause on the ‘dangerous race’ to make AI as advanced as humans in an open letter urging the sector to cease training models more powerful than GPT-4.
Is this the correct approach? Should we be fearing ChatGPT or embracing it? Looking at this issue is Kamales Lardi (author of The Human Side of Digital Business Transformation). Lardi is listed in the “Top 10 Global Influencers & Thought Leaders in Digital Transformation” (Thinkers360) and “Top 50 Women in Tech Influencers 2021” (The Awards Magazine), and she has set out her thoughts and vision to Digital Journal.
According to Lardi, there is some support for the Musk view: “In my view, the concerns raised may be legitimate to some extent – potential for generative AI to become competitive in general tasks; acceleration of misinformation if AI-based systems are allowed to access information openly; lack of intelligent regulation around the application of AI; limited understanding of how AI-based systems could fully function and utilize data, sometimes even by their own creators etc.”
While there are concerns, Lardi is critical of the Musk-led approach. She states: “The open letter comes across more as fear-mongering rather than addressing the critical issues relating to AI development.”
As well as side-stepping the more important technical aspects, Lardi is also critical of the business leaders for being “less concerned with the ethical implications of AI”, citing recent layoffs such as those at Microsoft, which let go of the team it had established to ensure ethical, responsible and sustainable AI innovation.
Lardi also notes that “Google let go of several leading researchers in ethics and AI back in 2020.”
Lardi is also concerned about the call to put the brakes on development, stating: “I disagree that the solution is to ‘immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’”
This is because “The deep-rooted issues that exist in the tech industry are much broader, and these would require more than six months to resolve.”
She is also uncertain about the motives, pondering: “The timeline is suspicious to me, perhaps Musk needs another 6-months to launch his version of AI in the market?”
Lardi’s preferred approach involves enacting the following:
Regulation
Ensure regulators are educated and able to set up intelligent governance for AI development without stifling innovation potential.
She states: “After all, AI-based solutions are creating transformative advancements in critical areas such as healthcare, financial services, education, manufacturing, and agriculture, to name just a few.”
Collaboration
Lardi is calling for greater collaboration across the industry by building an ecosystem of key stakeholders. By this she recommends “including tech companies, industry leaders, corporates, thought leaders and experts, as well as regulators and customers, to ensure a range of people understand and are involved in the development of AI. Similar consortiums in other tech areas such as blockchain exist.”
Transparency and diversity
Lardi calls for the creation of “open and transparent dissemination of information relating to the sources and use of data for AI-based systems”, supported by “an increase in diverse teams of people involved in development and testing of AI-based systems (diversity of thought, avoid groupthink, challenge decisions, ensure ethical views etc.).”
Blockchain as a potential solution
One solution lies with blockchain, as Lardi elaborates: “The challenges in AI with data provenance and transparency could be addressed with blockchain technology that offers these specific capabilities. The convergence of these top techs could result in powerful tech solutions that offer fast and sophisticated generative capabilities while still ensuring data is used in a transparent and monitored way.”