Op-Ed: Incoming – A.I., the cyber arms race and a new security culture

Posted Sep 10, 2019 by Paul Wallis
The cyberwar has been ongoing for quite a while, since the beginning of the digital revolution. Now, it’s foreseeable that the current national security culture is eroding as technologies shift and evolve. The future is looking very complex indeed.
GE Aviation's aircraft engine technology was the target of a Chinese espionage operation, according to the US Department of Justice
In what must surely be one of the most patient and polite articles ever published in The New York Times, NSA counsel Glenn S. Gerstell opens the blinds to the real state of the future security issues. It’s disturbing reading, made bearable by the fact that an expert who obviously has a lot of analytical experience makes the issues very clear.
Gerstell’s most basic point is that the current information culture is basically realigning and repositioning security assets outside traditional security areas of operation. That applies from who has what information to who does what with it, and this shift is putting some serious stress on governments to keep up. Security agencies are well behind the eight ball in numerous areas, and the private sector, as custodians of a vast range of assets and information, is now controlling a virtual universe of high value data.
This point isn’t merely irrefutable. It’s critical. The rise of the Big Data culture has created an endless gathering of information which does have security value, and in some cases, military value. Cyber theft, in particular, is one of the more dangerous elements, made more dangerous in that a mix of national actors and “unofficial” actors can now hack into practically anything.
To add to the mix, new technologies are making the cyberwar a lot hotter than it was, and cyber espionage is attracting big money. There’s no lack of interest in finding and selling information around the world. That’s just the human factor.
The rise of A.I. as a security threat/asteroid strike
There’s a new class of player in security, artificial intelligence. A.I., in particular, will create a virtual new security battleground, when it’s up and running properly.
To explain – A.I. will be present as perhaps millions of evolving artificial intelligence entities. These entities are likely to be generational operators, learning as they go, adapting and diversifying. The A.I. entities will come with varying degrees of capability for extracting big data, conducting cyberattacks, sabotaging data, and generally rampaging through the digital culture and economy. It’s like adding millions of new actors to the security landscape.
These A.I. people don’t sleep, don’t eat, and don’t have any form of ethical constraint. They’re on the job 24/7/365, and every single one of them is a possible risk to some sort of security. If you’re thinking that means a huge added load on national security on multiple levels, you’re quite right.
Realistically, to put this in perspective, current security capabilities will be made obsolete on a routine basis. They will be overstretched, and thanks to technological progress, whole classes of security tech will simply be overmatched by A.I., even on the most minimal threat assessment.
The security agencies aren’t crying wolf. They’re crying “asteroid strike”, with good reason. The incoming cyberwar will make the present look like kindergarten. The future huge increase in capacity for strikes against national assets is barely calculable.
…So what’s being done about it, you ask?
Governments are scrambling to catch up with, or even comprehend, the basics of these new threats. One of the problems, as Gerstell patiently points out, is that the United States information economy is pretty much hardwired for the private sector to take the lead. It owns the assets, stores and processes the data, does the research, etc.
The US private tech sector is much like a Mississippi flood. It can go anywhere at any time, and a lot of liquidity and assets will be involved. According to my reading of American history, telling the Mississippi where to go and how to get there is a particularly thankless task. Mark Twain convinced me that it has its own views on those subjects.
In the US, the public and private sectors are often poles apart. The security agencies, being public, are at a serious disadvantage in accessing, let alone managing, security threats across this vast range of anything and everything. Federal laws do have a role, but only to the extent that they can actually regulate these technologies. Regulating a whole new range of emerging security threats, let alone actually combatting them, is hardly likely to be simple.
It’s not likely to be efficient, or quick, either. Some pushback from the private sector against security access to information is habitual. Consider that a major cyberattack can happen in a nanosecond, while a court case to enable a security response can take years. That situation is hardly a recipe for effective security operations.
The cyber arms race, explained to a point
The cyber arms race is currently defined as:
• Development of security threat technologies and responses.
• Research, and a lot of it, across a vast, almost undefinable range of information systems.
• Accessing the skills and talent to do the research. That’s not easy, because the private sector typically offers better and more remunerative career paths than the public sector.
• A.I. operational capacity is highly nebulous. Good A.I. systems will take a while to develop. The current A.I. ranges from inept to super-processors, and what it can do depends on its learning and acquisition of capabilities.
• Managing the risks created by new tech like the Internet of Things, which is likely to be the most vulnerable, incredibly stupid and unnecessary security risk facing the world in the near future. In theory, your smart fridge could be the vector for a hacker to start blazing away with nukes.
This cyber arms race comes with a few major risks of its own. Vast amounts of capital can be put into any of these fields, with any sort of result, from clumsy, highly scripted dud A.I. to highly efficient systems. Security agencies are pretty used to SNAFU as a working principle of operations, but the sheer scale and complexity of these systems, hardware and software, makes SNAFU a lot more of a problem than just basic dysfunction.
Like any arms race, investment, time and resources may lead to a systems culture which creates tech which is either obsolete before it becomes operational, useless, or simply ineffectual. History offers countless examples of military and security forces being well-equipped with useless assets, and this will be another.
What can be done, you ask, while considering a new cave in which to live?
There are ways through this hideous mess. Workarounds are the default response to most of the obstacle courses created by this situation. The risks are real enough, and likely to go far beyond the current projections. That simple if ugly fact will be the driver for achievement in managing the cyberwar.
1. The private sector has a lot to lose and a lot to gain from support of, and cooperation with, security agencies. A simple, efficient framework is required for a same-page response to manage threats and defences. That could be a lot easier than it seems, because the private sector is well aware it is a top-tier target for security breaches.
2. Removal of vulnerabilities needs to be systemic. Old tech, lousy tech, and stupid tech can be removed without too much fuss. Simple replacement and threat-proofing (it is possible, through efficient access control) won’t ruffle too many feathers and makes good business sense.
3. Staying ahead of the competition in terms of A.I. is a no-brainer. The US has a substantial reach and range advantage which can make research a rather expensive, time-consuming guessing game for the opposition.
4. Security agencies must have effective/enforceable input into national and global security tech initiatives. Glaringly obvious as this may seem, the need is to ensure that security risks are managed holistically, on a facts-driven basis. The “patchwork when we get around to it” approach could be fatal, otherwise.
5. The Fourth Amendment can work with, not against, proper security measures. Rights need not be at risk. The right to privacy doesn’t have to be compromised; in fact, the simplest way to manage this issue would be to simply make it clear that Fourth Amendment rights are fully enforceable, regardless of the specific security situation. Any resultant mess can be cleaned up after the threat is neutralized. (The courts won’t mind a bit of clarity, either.)
6. Security countermeasures possibilities are practically limitless. Compulsive hacking can be an own goal, and so it should be. Take it from there.
If we’re to have global cybersecurity, and wind back this obscene situation into a more rational and less rabid environment, some heavy hitting needs to be done. A.I. is not omnipotent. It can be deceived, it can be misled, and it can be task-specific to the point that it can’t do much else. Look for the weak points, because they are many.
Meanwhile, just make damn sure your security agencies have the scope and range to do what needs to be done. The alternative is truly horrific.