Facebook will use the technology across its three primary properties: Facebook, Messenger and Instagram. The announcement has been welcomed as a significant mark of progress in the fight against revenge porn, a problem plaguing social networks.
Facebook’s new system is activated the first time an image is reported. If you see a photo that looks like it was shared without consent, you can press the “Report” button to flag it to Facebook. The post will be reviewed by the company’s Community Operations team, who’ll assess it and determine whether it should be removed from Facebook.
If it’s found to contravene Facebook’s platform rules, the image will be removed. It will now also be added to the company’s photo-matching system. If a person tries to upload the photo again, to any of the three services, the tech should be able to detect the image and prevent it from being shared. The user will be alerted about the incident and warned not to attempt the upload again.
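Facebook hasn’t published the details of its photo-matching technology, but systems of this kind typically rely on perceptual hashing: a fingerprint that stays stable when an image is re-compressed, resized or lightly edited, unlike a cryptographic hash. A minimal sketch of one such technique, difference hashing (dHash), is shown below. All function names are illustrative, and the grayscale grid stands in for a real decoded image; this is not Facebook’s actual algorithm.

```python
# Illustrative sketch of perceptual "difference hashing" (dHash), one
# plausible approach to matching re-uploads of a reported image.
# Assumes the image has already been decoded to a small grayscale grid.

def dhash(pixels):
    """Hash a grayscale image given as a 2D list of 0-255 brightness values.

    Each pixel is compared with its right-hand neighbour; the resulting
    bit pattern tends to survive re-compression and minor edits.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_match(h1, h2, threshold=4):
    """Treat two images as the same if their hashes differ in few bits."""
    return hamming(h1, h2) <= threshold
```

In a deployment like the one described, the hash of each removed image would be stored, and every new upload hashed and compared against that list; a small Hamming distance means a likely re-upload even if the file bytes differ.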
An appeals process will be available in the event the system gets something wrong. Users can already appeal when an image is reported for non-consensual sharing. Facebook usually disables accounts if they are used for sharing intimate images without permission.
Facebook said it has worked with over 150 safety organisations in the past year to determine how it can improve its service to protect users. The feedback from the groups, based in regions around the world, led to today’s launch. It joins existing tools including information on how to respond to revenge porn and guidance to follow when reporting material that violates Facebook policies.
“These tools, developed in partnership with safety experts, are one example of the potential technology has to help keep people safe,” said Facebook. “Facebook is in a unique position to prevent harm, one of our five areas of focus as we help build a global community.”
Facebook’s announcement has been welcomed by experts. However, the photo-matching system still can’t flag images the first time they’re uploaded. The company remains reliant on users reporting posts before it becomes aware of them. The system only helps to stop the proliferation of images across its platforms, ensuring images uploaded without consent can’t then be shared via Messenger or Instagram.
The photo-matching technology isn’t true artificial intelligence, although this could be a part of future Facebook safety mechanisms. AI is already used to detect offensive images but lacks the necessary contextual understanding to make the right decision in cases of revenge porn.
The new tools have already been enabled across Facebook’s services. They’re operated by the company’s specially trained support staff who are responsible for ensuring the safety of the company’s users. Facebook said it remains committed to creating a protected online environment and will continue to develop its technology in the future.