Op-Ed: Break up the tech giants to protect privacy? Maybe not

By Paul Wallis     May 18, 2018 in Internet
Sydney - NYU professor Scott Galloway has called for the big tech monsters to be broken up to deal with privacy issues. The problem is that this whole environment is a moving target, and the big companies are targets themselves.
Galloway doesn’t mince many words. He doesn’t expect the big tech companies to regulate themselves. He wants leaders who can: "...rein in the tech giants with regulation and, possibly, by using antitrust laws.”
So far so good; Galloway's full commentary is worth reading. He also points out that, unlike leaders such as Trump and Putin, the corporations will be part of human reality for a long time. This longer view is at least a departure from the usual knee-jerk responses to the swarm of tech issues (bots, hacking, Cambridge Analytica, Wikileaks, etc.) and the frenzy of ineffective bleating we're used to from the "thought leaders".
Galloway has raised the issue and got people to listen, to his credit. The question is how the hell any of this is supposed to happen. He's quite right, but this is a far from simple picture. Bring a tent and read on.
The Problems
These obvious problems, however, are likely to change drastically in the long term. They could get a lot worse, and fast. The advent of AI, mega-media, and a very much changed social and economic landscape need to be factored in. The law has been absolutely pitiful in keeping up with even the current level of tech.
The core issues are:
• Privacy and personal security
• Financial security
• Operating systems which run the planet
• All communications networks of all kinds
• Government systems
• Military systems
• Health systems
Corporations are going to be on the receiving end of all of these problems as much as they deliver the systems. Their long view is likely to be clouded somewhat by a large amount of stuff hitting the fan all the time.
The Legal Issues for “Leadership”
There’s no real indication that the situation is likely to change under the current leadership dynamic, particularly with the United States almost totally dysfunctional in terms of leadership and international credibility, and anti-science to boot.
Nobody is likely to believe, for example, that:
1. The US will suddenly adopt a rational, viable and sustainable posture in regard to online risks of any kind.
2. China and the Russian Federation will suddenly stop exploiting the world’s extremely vulnerable online media or curtail their chronic, round-the-clock espionage practices.
3. Online privacy laws can get through the supremely constipated US Congress, where a useful global precedent for online regulation might otherwise be set.
4. Organized crime is suddenly going to respect people’s privacy or anything else. The organized crime input is crucial, because they’re the enablers of a lot of the actual attacks and hacks. In a country like the US, where organized crime is a virtual sacred cow, not much can be expected except a few minor busts, with no real hits on the billion-ton gorilla itself.
5. The ridiculous (and, in my opinion from experience with some smug secure bastard in France, insane) online security industry will suddenly become useful. This mega-rich, ultra-ineffectual moron festival of an industry has been, and will be, no use at all. Under its babbling, perverse guidance, internet and other crime has simply grown exponentially, untroubled.
It is impossible to believe that the required basic comprehension, let alone the political will, is there at any level to even come up with a rational response to these issues. Governments have been screwing up internet regulation for decades. They have a track record of 100% failure. Even the theory of regulation has so far barely scratched existing problems, let alone the mega-blast of future issues already in motion.
The Tech Problems to Come
OK, inspired enough so far?
The next level of BASIC tech to be regulated will include:
1. Multiple AI entities operating damn near anything and everything. These entities could be huge, bigger than whole server networks, like AI super-colonies of ants. Just think: every byte a risk.
2. The idiotic, shambolic, truly 200% half-assed Internet of Things, the all-time obvious risk-creation mechanism for the future: how to create global vulnerabilities without really trying.
3. Private AIs in their millions which can do whatever they’ve been taught to do. Imagine a criminal AI or hundreds of thousands of them. Sound like fun? A few trillion bots/hacks/DOS attacks a second? It’s doable, and it will be done.
That’s just a few of the more obvious risks. The fine detail could include billions of regulatory issues in context with Galloway’s ideas.
Into the Valley of Dumb Go the Big Techs
Consider for a second or two the position of the tech giants in this mess:
Google, Microsoft, Amazon, Apple, Facebook, etc. are right in the middle of both the new tech and the problems. The hardware guys are in very much the same general position from their end. The need is to develop; the next need will be to fix whatever problems these new technologies cause.
The corporations are in an unenviable position. They’re developing the tech themselves. Check out Google AI and some recent developments in military AI to see what’s already making impacts. That, sadly, doesn’t mean they or anyone else can predict vulnerabilities and/or prevent them.
The positives of the big tech companies are:
• Centralized control applied across entire networks consistently. (OK, whether these controls work or not is the other issue. The point is that they CAN respond to their own issues pretty effectively.)
• Big capital to develop better, safer, more manageable systems. That’s not to be sneered at too much. Even allowing for the occasional Developers’ Fantasyland, extravagance, and misguided developments, the money has to be there to grow the working systems.
• A degree of interaction among the market leaders. Unavoidable, but also good for consistent security, privacy, etc. if they do it themselves. The collective practical response can outmuscle and out-finance the nutcases, developing better systems and safeguards.
• The self-defense mechanisms of big tech companies. The Cambridge Analytica horror story may have had at least one positive effect. It’s ludicrous to think that Facebook was naturally prepared for such a truly oblique form of abuse. That said, the response so far has been pretty fast, and is trying to do a good job of shoring up the bulkheads against future exploitation, to the extent it can. That may be a lot better than nothing, but the imponderables remain.
• Proper attention to legal risks. It’s an irony that the last people to understand legal risks seem to be lawmakers. The corporations do understand, and the expected levels of legal ferocity in future will have them planning very thoroughly for those risks. It’s likely that all the big techs have taken due note of the risks as applied to themselves.
With due respect to Prof. Galloway’s proposal, I don’t see how smaller, more vulnerable, less capital-heavy corporations could fight the incoming tide of future issues. They can’t be strong enough in the markets, and they don’t have the “do it our way” clout of the major leaguers.
Galloway is right that the world’s dribbling, minutiae-obsessed governments should be doing their jobs, of course. The question is whether these regulatory retards can be brought up to speed to do what needs doing. It’s not impossible. It’s just likely to be maddeningly slow, and anything but thorough, when what’s needed is thoroughness and speed.
…So start toasting the marshmallows. Could be a long vigil before much happens.
This opinion article was written by an independent writer. The opinions and views expressed herein are those of the author and are not necessarily intended to reflect those of DigitalJournal.com