Biometrics were always going to be weaponized. They’re just baseline coordinates any grade-schooler could figure out, so building an identification system on them was a no-brainer.
The problems were and are inevitable. Someone was always going to find a way to abuse biometrics. Deepfakes are a case in point; simple hacks to hide or fake biometric markers are further examples. Biometrics are not infallible, or anything like infallible.
Note: To many commentators, including me, this situation is not exactly new. I’ve spent at least 10 years specifically not mentioning the highly risky issues in biometrics. The potential for massive abuse was obvious, and could actually be a lot worse. I just didn’t and still don’t want to write a handbook on How to Create Yet Another Tech-Based Train Wreck. The question remains why so much intrusive technology isn’t considered far more thoroughly before it is inflicted on the world.
Managing the legal mess and privacy risks caused by biometrics, however, is anything but simple. Australia is looking at draft laws to manage these issues, and the arguments are getting more complex by the minute.
The current situation and working theories are:
- Privacy is the defining issue in the draft model for regulation.
- The Privacy Act, the dominant legislation, is generally considered inadequate, as it usually is when any type of tech is involved.
- Use of biometrics by law enforcement, commercial security, and other institutions is a very grey, untrustworthy-looking area.
- The various Australian State and Commonwealth Evidence Acts may or may not be able to handle biometrics, which are contestable anyway.
Against which:
- Biometrics, like any information, can also be currency. The vast amounts of public and private surveillance images and information can be hacked and misused.
- “Face search engines” effectively create databases of faces. The good news is that these things are search engines, and can be as reliable and/or messy as any other search engine.
- These biometrics can be stolen, sold, and manipulated pretty easily. So can the live surveillance stuff, for adding or erasing people.
- Biometrics aren’t even legal in some jurisdictions. Facebook rolled out facial recognition in 2011 and decided to shut it down in 2021 over compliance and other issues; the facial recognition data of more than a billion users is to be erased.
This is not to downgrade the original basic arguments for regulation. Privacy IS undeniably and rightly a critical issue. The legalities are way too blurry.
What’s bothering me is that this potentially very high-value information isn’t physically protected. It needs to be under some sort of lock and key. The draft legislation allows for a warrant environment for facial recognition evidence. That’s good.
The future of ever-expanding forms of biometrics
Remember, this is just faces we’re talking about at the moment. Humans give off a lot of other recognition signals, like smells, hormonal signals, etc. What if those signals get the same sort of scrutiny as facial data?
This can go a lot further, fast. I also dread to think what might happen if “emotional recognition” gets the same sort of status as facial recognition. What would happen in a mental health case with botched or tampered biometric information?
So far we’ve established the obvious with a lot of caveats:
- The law must be able to cover the use of biometrics.
- The tech definitely isn’t infallible.
- It can be misused.
- It can be valuable to bad actors.
- The scope is unlimited for other types of individualized recognition, reliable or not.
Inspiring as all this may sound:
Too many ideologies use surveillance as a form of direct oppression. China’s much-bitched-about social credit system is the most advanced form of this type of surveillance. Biometrics are real-time agents in this process.
It’s more than theoretically possible to create a lot of fake “evidence” using the huge amounts of data collected about everyone on Earth. This is info-LEGO, in effect. You could create John Smith, a nice guy and expediently charged mass murderer, out of thin air with all this data, right down to fake DNA. So the actual mass murderer goes uncharged.
In my experience, the most cynical and untrusting response is the best way to look at the risks of most technologies, including the wheel, if it ever gets invented.
Note: It is true that you can use surveillance data as a defense. You can prove your whereabouts simply by subpoenaing surveillance footage from a given time and place. The trouble is that deepfake tech can do the same thing, so there’s already a quality-of-information issue built into biometrics. It’s tricky and likely to get a lot trickier.
Solution? Probably not, but worth considering.
There has to be a point at which “You’re not allowed to collect this or that data” gets a word in edgewise.
Any sort of regulation can include some basic requirements:
- How much of this surveillance data is actually required?
- Can you put a statutory limitation on the retention of biometrics?
- Can you demand access to any or all surveillance data affecting you? (In court, yes, but maybe not as a private citizen outside a court.)
Ironically, there IS a solution, courtesy, of all people, of global intelligence. Intelligence surveillance is a massive operation with an unquantifiable data load. The issue here is that most of that vast range of information isn’t of the slightest use to the intelligence agencies.
They have to vet this unspeakable ongoing mass of data to make useful information findable and accessible. So a functional edit is not only possible but essential. This is a way of looking at data, not a systemic overhaul. Why not apply that principle to basic surveillance? The data would still exist but wouldn’t have to be a risk to anyone.
Another solution could be AI-regulated management of data in verified compliance with law. It’s efficient, quick, and based on the healthy theory that you only need to access relevant information. The needle in effect finds the haystack.
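To make the idea concrete: the combination of a statutory retention limit and warrant-gated, relevance-scoped access could look something like the sketch below. Everything here is a hypothetical illustration — the record structure, the 90-day retention window, and the warrant flag are assumptions of mine, not provisions of any actual draft legislation.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # assumed statutory retention limit, purely illustrative

# Hypothetical store: each entry pairs a subject with a capture timestamp.
records = [
    {"subject": "A", "captured": datetime.now(timezone.utc) - timedelta(days=10)},
    {"subject": "B", "captured": datetime.now(timezone.utc) - timedelta(days=400)},
]

def purge_expired(records, now=None):
    """Drop records older than the statutory retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["captured"] >= cutoff]

def scoped_lookup(records, subject, warrant=False):
    """Return one subject's unexpired records, and only under a warrant."""
    if not warrant:
        raise PermissionError("access requires a warrant")
    return [r for r in purge_expired(records) if r["subject"] == subject]
```

The point of the design is that the bulk data never reaches a querent directly: expired material is filtered out automatically, and what remains is only reachable one named subject at a time, under explicit authorisation — the needle finding the haystack, rather than the haystack being handed over.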
This regulation is much needed, and now essential. The current draft, like all draft legislation, can’t be perfect. What’s so important is that quite literally every single human being on Earth is affected by what happens with surveillance data.
The Australian laws may be one of the first sparks of rational thinking on the subject, too. It’s absurd and dangerous that so much obsessively obtained data can be such a risk. Watch this subject, because you may be the subject.
_____________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.