
‘Musk’ do better: Why Elon and co. are wrong about ChatGPT

The deep-rooted issues that exist in the tech industry are much broader, and these would require more than six months to resolve.

Elon Musk making Twitter more open to hateful, harmful or dishonest tweets could be ramping up financial pressure on the tech firm the Tesla chief bought late last year in a $44 billion deal - Copyright AFP Miguel MEDINA

Elon Musk and other prominent technology leaders have called for a pause on the ‘dangerous race’ to make AI as advanced as humans in an open letter urging the sector to cease training models more powerful than GPT-4.

Is this the correct approach? Should we be fearing ChatGPT or embracing it? Looking at this issue is Kamales Lardi (author of The Human Side of Digital Business Transformation). Lardi is listed among the “Top 10 Global Influencers & Thought Leaders in Digital Transformation” (Thinkers360) and the “Top 50 Women in Tech Influencers 2021” (The Awards Magazine), and she has set out her thoughts and vision to Digital Journal.

According to Lardi, there is some support for the Musk view: “In my view, the concerns raised may be legitimate to some extent – potential for generative AI to become competitive in general tasks; acceleration of misinformation if AI-based systems are allowed to access information openly; lack of intelligent regulation around the application of AI; limited understanding of how AI-based systems could fully function and utilize data, sometimes even by their own creators etc.”

While there are concerns, Lardi is critical of the Musk-led approach. She states: “The open letter comes across more as fear-mongering rather than addressing the critical issues relating to AI development.”

As well as side-stepping the more important technical aspects, Lardi is also critical of the business leaders for being “less concerned with the ethical implications of AI”, as indicated by recent layoffs such as at Microsoft, which let go of the team it had established to ensure ethical, responsible and sustainable AI innovation.

Lardi also notes that “Google let go of several leading researchers in ethics and AI back in 2020.”

Lardi is also concerned about the call to put the brakes on development, stating: “I disagree that the solution is to ‘immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’”

This is because “The deep-rooted issues that exist in the tech industry are much broader, and these would require more than six months to resolve.”

She is also uncertain about the motives, pondering: “The timeline is suspicious to me, perhaps Musk needs another six months to launch his version of AI in the market?”

Lardi’s preferred approach is to enact the following:

Regulation

Ensure regulators are educated and able to set up intelligent governance for AI development without stifling innovation potential.

She states: “After all, AI-based solutions are creating transformative advancements in critical areas such as healthcare, financial services, education, manufacturing, and agriculture, to name just a few.”

Collaboration

Lardi is calling for greater collaboration across the industry through an ecosystem of key stakeholders, “including tech companies, industry leaders, corporates, thought leaders and experts, as well as regulators and customers, to ensure a range of people understand and are involved in the development of AI. Similar consortiums in other tech areas such as blockchain exist.”

Transparency and diversity

Lardi calls for the creation of “open and transparent dissemination of information relating to the sources and use of data for AI-based systems”, supported by ensuring that “diverse teams of people are involved in development and testing AI-based systems (diversity of thought, avoid groupthink, challenge decisions, ensure ethical views etc.).”

Blockchain as a potential solution

One potential solution is blockchain, as Lardi elaborates: “The challenges in AI with data provenance and transparency could be addressed with blockchain technology that offers these specific capabilities. The convergence of these top techs could result in powerful tech solutions that offer fast and sophisticated generative capabilities while still ensuring data is used in a transparent and monitored way.”
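To make the data-provenance idea concrete, here is a minimal, hypothetical Python sketch of the general technique Lardi alludes to: an append-only, hash-chained ledger recording which datasets an AI model was trained on, so the trail can be audited for tampering. This is not Lardi’s design or any specific blockchain product; the class, field names and example datasets are illustrative assumptions only.

```python
# Illustrative sketch only: a hash-chained ledger of dataset-usage records,
# standing in for what a real blockchain would provide (shared, tamper-evident
# storage). Names and fields are hypothetical, not any vendor's API.
import hashlib
import json
import time


def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a provenance record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class ProvenanceLedger:
    """Append-only chain of records, each linked to the previous one's hash."""

    def __init__(self):
        self.entries = []

    def add(self, dataset_id: str, source: str, used_for: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "dataset_id": dataset_id,
            "source": source,
            "used_for": used_for,
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        record["hash"] = record_hash(record)
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Check that no record was altered and the chain links are intact."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev or entry["hash"] != record_hash(body):
                return False
            prev = entry["hash"]
        return True


# Example: log two (hypothetical) training datasets, then audit the trail.
ledger = ProvenanceLedger()
ledger.add("web-crawl-2023", "https://commoncrawl.org", "pre-training")
ledger.add("curated-qa-v1", "internal annotation team", "fine-tuning")
print(ledger.verify())  # True unless a record has been tampered with
```

In a real deployment the same kind of records would be written to a shared blockchain rather than an in-memory list, which is what would make the audit trail tamper-evident across organisations rather than within a single system.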

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
