The new face recognition software being developed in Australia is far more accurate than its predecessors, and can be operated anywhere. It can even work with low-quality video data, which suggests its recognition capability is genuinely strong.
The Sydney Morning Herald went looking for more information and came up with some more details on the software that's used to identify people:
University of Queensland professor Brian Lovell, project leader at federal government body NICTA's advanced surveillance project, earlier this month won a global Asia-Pacific ICT Alliance award for his team's five-year project, which he says solved the "holy grail" problem of face recognition.
For the first time, Lovell says he and his team have been able to use grainy, low quality CCTV video footage to identify individuals from databases and even find and track people as they move around an area.
(The SMH also published a video with no sound on it, for some reason; see the link. It may be art, but it could be far more informative.)
This is “policing” technology. Which would presumably explain why Google, Facebook and Apple are developing similar technology that is equally risky to the public.
There are obvious police uses for the technology, but that’s not really the issue. The problem is that anyone could get hold of it, hack it, or otherwise abuse it, and put people at risk.
Apparently this half-arsed idea is based on the theory that there's some general legal right to identify people. There isn't. Unless it's for a lawful purpose or within the reasonable requirements of business, even asking someone their name is theoretically a breach of privacy, and they don't have to give it to you unless they feel like doing so.
The human face contains as many unique identifiers as fingerprints or DNA. These are mathematical relationships, and even identical twins can’t fool the equations.
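To illustrate what those "mathematical relationships" look like in practice, here is a minimal sketch of how face recognition systems commonly work: a face image is reduced to a numeric "embedding" vector, and two faces are declared a match when their vectors are close enough. The vectors, names and threshold below are entirely made up for illustration; real systems use learned embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors:
    # dot product divided by the product of their lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.8):
    # A "match" is just a similarity score above a chosen threshold.
    # The threshold trades false positives against false negatives;
    # 0.8 here is an arbitrary illustrative value.
    return cosine_similarity(emb1, emb2) >= threshold

# Toy embeddings (hypothetical): two photos of the same person
# should yield similar vectors, a different person a dissimilar one.
alice_photo_1 = [0.9, 0.1, 0.4]
alice_photo_2 = [0.85, 0.15, 0.42]
bob_photo = [0.1, 0.9, 0.2]

print(same_person(alice_photo_1, alice_photo_2))  # True
print(same_person(alice_photo_1, bob_photo))      # False
```

The point of the sketch is that a "match" is purely numerical, which is why identical twins, whose facial geometry still differs measurably, can't fool the equations.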
Now, let’s consider the abuse options:
1. Tracking people’s movements
2. Investigating relationships
3. The odd hobbyist murder
Fun so far, isn't it? And these are just generic descriptions.
Surveillance cameras are useful. The problem here is that they can positively identify people and put names to faces accurately. That’s a very different situation, and it’s essentially illegal, at least in principle, because of the level of implied intrusion.
There is a way around this situation, and it's based on PINs. Strictly speaking, a PIN, like other ID, is personal property. Identifying parameters can also be property. Legally, that would mean there's no right of access by anyone but authorized people. At least that would theoretically deal with the privacy issues.
The other issues, however, are all about real-time physical security, and that's a lot harder to manage. Thanks to our crime-ridden society, risks from people who aren't at all worried about laws are far higher than at any time in history. Identity theft is a major issue.
(Ironically, the face recognition software could actually help prevent ID theft. Faces aren’t duplicable, at least not at this time. Your unique equations could prevent someone ripping you off.)
They wouldn't prevent you from being targeted very specifically, however. Anyone with access to the software could go looking for you with a pretty good chance of finding you online, on Facebook or wherever else they can get access to enough visual information for a positive ID check.
And the defence against abuse is…?
Rhetoric, so far, and some rather vague, unsettling statements:
The editor-in-chief of The Guardian, Alan Rusbridger, in his 2011 Orwell lecture earlier this month, revealed that he had a conversation with a "senior Google figure" who was musing about the potential of Google face recognition software, "whose effects are so far reaching the company can't quite yet decide what to do with it".
Rusbridger said the Google exec told him the software could match a face to a name with any images sitting anywhere on the web, as long as one match had been made.
"What made this so troubling he said, is that digital spiders could then crawl the web and find every picture in the public domain and match it with an identity," he said.
"So the moment one match is made it would be possible to scan every street or crowd scene over several decades to see where a particular individual was. Link that to the sort of all-pervasive CCTV systems we have in this country [Britain] and you have a formidable infrastructure – current, but also historical – for total surveillance."
Reassuring, isn’t it? If you’ve ever had a photo online or anywhere else, it can be used as a basic reference.
This is worse than a police state in terms of its potential to affect people's lives, and it's hard to see many options for positive effects. Anyone could be targeted, by anyone, for any reason, and the opportunities for abuse are practically endless.
I did an article on DJ a while back about Facebook's new face recognition software, with a lot of reservations about the potential for abuse. This is open slather: uncontrolled, uncontrollable, and any legal remedies would be a long way behind the events.
If you're developing this technology, think about this:
If your technology or website, or a licensee of your technology, provides material support for a crime or an invasion of privacy, you could be facing the most indefensible class actions and other litigation in history. You can't claim this technology doesn't provide positive identification of individuals, because that's exactly what it's designed to do, and that, by definition, can be construed as an invasion of privacy unless it's for lawful purposes. That means purposes as defined by law, in actual legislation. Otherwise, it's an offence in fact.
This software must come with a lot of safeguards, because it's unsafe by definition. People could get killed. The only way to make this technology safe is to come up with a trustworthy rulebook for access to the information it provides.