Setting out the case for AI regulation

Establishing trust is crucial for fostering innovation, productivity, and a positive culture of safety in the workplace.

Google is giving its Bard chatbot a major artificial intelligence boost as ChatGPT-maker OpenAI deals with the aftermath of a boardroom coup that saw chief executive Sam Altman fired then rehired within a span of days. — © GETTY IMAGES NORTH AMERICA/AFP WIN MCNAMEE

The European Union’s recent Artificial Intelligence Act establishes the groundwork for future AI regulation: banning certain AI uses, introducing transparency rules, and requiring risk assessments for AI systems deemed high-risk.

With the European legislation, the aim is to ensure that all AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. The AI Act takes a risk-based approach, meaning that different requirements apply according to the level of risk.

A general recommendation within the legislation is that artificial intelligence systems should be overseen by people, rather than by automation, in order to prevent harmful outcomes, especially where access to services is involved.

The technology world is already on edge over privacy, and without these rules in place the introduction of AI could prove ‘terrifying’ for privacy.

Yet we must remember that AI is not a natural phenomenon that is happening to us, but a product of human creation that we have the power to shape and direct.

Melih Yonet, Head of Legal at Intenseye, a category-defining environmental health and safety (EHS) platform powered by AI, recognises the importance of AI safety and privacy, as he explains to Digital Journal.

Yonet explains: “We have long recognized the critical risks posed by technologies that violate privacy or other personal rights – especially in the context of EHS – which is exactly why we’ve irreversibly embedded privacy-by-design principles into our workplace safety solutions and use techniques such as pseudonymization and 3D-anonymization.”
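For readers unfamiliar with the term, pseudonymization generally means replacing direct identifiers with artificial ones, so that records cannot be attributed to a person without separately held information. The sketch below is a hypothetical Python illustration of that general technique, not Intenseye’s implementation; the key name, function and event fields are invented for the example.

```python
import hashlib
import hmac
import os

# Hypothetical illustration only, not Intenseye's actual implementation.
# A worker identifier is replaced with a keyed hash (a pseudonym) so that
# downstream safety analytics never handle the raw ID; re-identification
# would require the separately held secret key.

SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "example-key-for-demo").encode()

def pseudonymize(worker_id: str) -> str:
    """Return a stable pseudonym for a worker ID using HMAC-SHA256."""
    digest = hmac.new(SECRET_KEY, worker_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# A safety event records the pseudonym, never the raw identifier.
event = {
    "zone": "loading-dock-3",
    "violation": "missing_hard_hat",
    "subject": pseudonymize("employee-00123"),
}
print(event)
```

Using a keyed hash rather than a plain hash is one common design choice here: the same worker always maps to the same pseudonym for trend analysis, yet the mapping cannot be reproduced without the key.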

Illustrating this, he adds: “We do, and will continue to ensure that our product is compliant with the applicable legislation and the risks arising from its operations are minimal.”

If AI algorithms are biased or used in a malicious manner — such as in deliberate disinformation campaigns or lethal autonomous weapons — they could cause significant harm to humans. The new legislation aims to counteract this, particularly as society becomes gradually more reliant upon AI.

Hence, there also needs to be trust in AI as it becomes more prevalent. As Yonet puts it: “Establishing trust is crucial for fostering innovation, productivity, and a positive culture of safety in the workplace.”

Beyond this, Yonet says: “Distinguishing and communicating content that has been AI-generated is essential for mitigating misinformation and promoting ethical AI usage in the workplace and beyond.”

Drawing on his own company’s offering, Yonet puts forward: “All alerts, visual analytics, and other outputs of Intenseye’s AI-powered solutions have always been – and will continue to be – clearly marked as such.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
