Worried about ChatGPT-4 and cybersecurity? Here are the in-built safety features

OpenAI has published guidelines for responsible use of its technology, outlining best practices and ethical considerations.

An open letter, signed by more than 1,000 people, was prompted by the release of GPT-4 from San Francisco firm OpenAI - Copyright AFP/File STR

How powerful is ChatGPT set to be, and what are the key features of the latest (fourth) version? JP Perez-Etchegoyen, CTO of Onapsis, explains to Digital Journal the advanced features of the new intelligent, communicating AI.

In an earlier article, Perez-Etchegoyen warned how cybercriminals are seeking to use ChatGPT for fraud. While this threat remains, the new iteration brings some important security advancements.

On these security enhancements, Perez-Etchegoyen says: “With the latest release of ChatGPT (ChatGPT-4), OpenAI has made available the latest and most powerful conversational model, also introducing more controls to prevent its misuse.”

As an example, Perez-Etchegoyen asked ChatGPT:

“What could prevent malicious actors using ChatGPT with malicious intent?”

This was the answer:

“There are several measures that could prevent malicious actors from using ChatGPT with malicious intent.”

Drawing these out and explaining how they work, Perez-Etchegoyen lists the following measures:

  1. Access controls: OpenAI, the organization behind ChatGPT, has implemented access controls to limit who can access its API and use its technology. By restricting access, it can prevent malicious actors from using the technology for malicious purposes.
  2. Monitoring and detection: OpenAI monitors the usage of its technology to detect and prevent malicious activities. It uses machine learning models to identify patterns and anomalies in usage that could indicate potential misuse.
  3. Ethical guidelines: OpenAI has published guidelines for responsible use of its technology, which outline best practices and ethical considerations. By following these guidelines, users can ensure that they are using the technology in a responsible and ethical manner.
  4. User education: Education and awareness can help prevent the misuse of technology. OpenAI provides resources and educational materials to help users understand the capabilities and limitations of its technology, as well as the potential risks of misuse.
  5. Legal consequences: There are legal consequences for using technology like ChatGPT for malicious purposes. Governments and law enforcement agencies have laws and regulations in place to punish individuals and organizations that use technology for illegal activities.
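The first two measures — access controls and usage monitoring — can be illustrated with a minimal sketch. All names here (keys, limits, functions) are hypothetical and for illustration only; this is not OpenAI's actual implementation:

```python
# Illustrative sketch only: hypothetical keys and limits, not OpenAI's systems.
# Shows the idea behind two of the measures above: API access controls
# (an allow-list of issued keys) and usage monitoring (a simple per-key
# request-rate check that flags anomalous bursts of activity).
from collections import defaultdict, deque
from typing import Optional
import time

ISSUED_KEYS = {"key-abc123", "key-def456"}  # hypothetical issued API keys
RATE_LIMIT = 5                               # max requests per window
WINDOW_SECONDS = 60

_request_log = defaultdict(deque)            # api_key -> recent request times


def authorize(api_key: str) -> bool:
    """Access control: only requests with an issued key are served."""
    return api_key in ISSUED_KEYS


def record_and_check(api_key: str, now: Optional[float] = None) -> bool:
    """Monitoring: log the request and flag keys exceeding the rate limit."""
    now = time.time() if now is None else now
    log = _request_log[api_key]
    log.append(now)
    # Drop entries that have aged out of the monitoring window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    return len(log) <= RATE_LIMIT  # False -> anomalous usage volume


def handle_request(api_key: str) -> str:
    if not authorize(api_key):
        return "denied: unknown key"
    if not record_and_check(api_key):
        return "flagged: unusual request volume"
    return "ok"
```

In practice, a real monitoring pipeline would look at far richer signals than request volume (prompt content, model outputs, account history), but the layering is the same: authenticate first, then watch for anomalous use.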

Back to ChatGPT’s own assessment: “Overall, preventing malicious actors from using ChatGPT with malicious intent requires a combination of technical controls, ethical guidelines, user education, and legal consequences. It is important to use AI language models like ChatGPT responsibly and ethically to prevent the technology from being misused for harmful purposes.”

According to Perez-Etchegoyen: “ChatGPT went ahead and enumerated all of the measures that OpenAI has taken in order to prevent its misuse.”

In addition, in the GPT-4 release blog, OpenAI explained how the model is supposedly safer than its predecessors: “We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”

This gives Perez-Etchegoyen a level of confidence, since the information provided “indicates that the right measures continue being improved to avoid its misuse, but as we all know the bad guys will continue to find ways to overcome those limitations as the model is more widely used and understood.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
