Op-Ed: AVATAR AI Virtual border guards — Backed by Homeland Security

Posted May 16, 2018 by Paul Wallis
Yep, virtual border agents, kiosks using AI and acting as lie detectors are already well advanced. Thrilled so far? It gets a lot weirder. The sheer level of faith in the idea is stranger than the tech.
A U.S. Border Patrol agent checks the identification of a motorist at a checkpoint at United States and Mexican border
With permission by Reuters / Jeff Topping
The virtual border guards are designed to detect “deception” with an alleged 80% accuracy. Funded by the Department of Homeland Security, the technology is well advanced and on the brink of deployment. These things, which are basically kiosks, are called AVATAR, or Automated Virtual Agent for Truth Assessments in Real Time. Catchy, eh?
You could be forgiven for thinking this is just another toy for the surveillance nuts. The pity of it is that this tech, deployed or not, is likely to be the basis for future developments. (There are other similar technologies in use around the world, but AVATAR operates at a much higher tier of security.)
The strangest, and perhaps most dangerous, aspect of AVATAR is the unseemly haste of the move to using AI tech against people. For the Thought Police nuts, this is a first Freudian slip of epic proportions in building their ultimate toy, and it unequivocally defines at least one not-very-reassuring line of AI development.
AI is coming, and coming fast. It’s already a major event in the finance sector, and it’s likely to be running global finance operations in the future. The difference with AVATAR is that it has high level backing, and directly impacts human beings.
The minute you accept giving AI authority over a human, you’re on the way to a very different world. It won’t be much of a world. It’ll be as loveless as Washington DC and even uglier than it is now.
Why AVATAR Matters
The much bigger current news about AVATAR is its potential, by definition, to intrude on human life at just about any level. This is a step into a continuum of progressively automated, inhuman security on any number of levels, in any number of contexts.
1. What if AVATAR or one of its descendants is introduced into the workplace? It’s not as though US employees in particular aren’t already under an incredible amount of surveillance. This could make the American worker’s hell a lot worse.
2. What if AVATAR becomes part of the Social Security network? This is another highly monitored, much-loathed system. Machine bureaucracy is worse than human, in some ways, and much more pedantic.
3. What if AVATAR is used in the voting system, with a predictable bias? What could you do about it? What if some AI redneck is monitoring voters?
4. The possible scenarios just get more complex. AVATAR may well be an AI with a mission, but what price the mission, and who’s watching the risks, if anybody?
5. More to the point: what else that can be used against human beings could be developed from AVATAR? All technology has been abused, without exception, throughout history, and this is technology of a very adaptable kind. It can be used in just about any social or business context. If there are no legal safeguards, it’s likely to be a curse like nothing before it.
Don’t expect this very bad idea to go away any time soon. It’s already taken hold, and there’s big money in it. AVATAR is just getting started. The proponents aren’t being exactly critical of anything to do with AVATAR. They’re virtually drooling. And check out the numbers: anyone who accepts an 80% accuracy rate is either an optimist or a fool. Some numbers need to be questioned properly, not rhetorically, and that’s obviously not happening.
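To see why an 80% figure deserves real scrutiny, consider the base rates. The sketch below is purely illustrative: it assumes (hypothetically) that “80% accuracy” means 80% sensitivity and 80% specificity, and that 1 in 1,000 travellers is actually being deceptive. None of these numbers come from the AVATAR project itself.

```python
# Back-of-envelope base-rate check on a lie-detection screening system.
# Assumed (not from AVATAR): 80% sensitivity, 80% specificity, and a
# deception base rate of 1 in 1,000 travellers. All figures illustrative.

def screening_outcomes(travellers, base_rate, sensitivity, specificity):
    """Return (correctly flagged, wrongly flagged) counts for a population."""
    deceptive = travellers * base_rate
    honest = travellers - deceptive
    true_pos = deceptive * sensitivity        # liars correctly flagged
    false_pos = honest * (1 - specificity)    # honest travellers wrongly flagged
    return true_pos, false_pos

tp, fp = screening_outcomes(travellers=1_000_000, base_rate=0.001,
                            sensitivity=0.8, specificity=0.8)
print(f"Correctly flagged: {tp:,.0f}")   # 800
print(f"Wrongly flagged:   {fp:,.0f}")   # 199,800
```

Under those assumptions, roughly 250 innocent people get flagged for every actual liar caught. That is the arithmetic an “80% accurate” claim quietly skips over.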
Maybe AVATAR is being researched by well-meaning people who are trying to develop their AI baby. That’s understandable. But starting a Hydra-headed, multi-contextual AI vs humans situation is a very different ballgame. When AI moves in, the question of whose side the AI is on is the one that matters. How do you guarantee rights, due process, or anything else, using a system like this? Particularly if the AI is considered infallible?
Don’t hold your breath for any kind of humanistic assessment of AVATAR any time soon. To borrow a quote – O Brave New World which has such utter fools in it.
Post script: This is one of the few articles I’ve ever written where the arguments simply boil over exponentially in all directions. I consider AVATAR as a serious digression from the higher values of AI. Its learning is based on a need to find guilty parties, to start with. With that sort of bias, which is already creating waves in AI research, how trustworthy is the AI? Or the agency running it?
AI could be a huge benefit to humanity; what I’m seeing is a huge lack of understanding of AI and its possible evolution. These lines of development lack vision. AI is being seen as a mere tool, not what it actually is: a hyper-learning asset which can be used to get things right and keep learning with an efficiency only AI can deliver.
The acceptance of AI has another negative, and it’s a grim prospect indeed: if humans no longer understand or criticize that learning and the learning process, AI will be effectively directionless and extremely vulnerable to abuse. The result cannot be good.