Op-Ed: Algorithms vs. free will? Market math and no free will propaganda

By Paul Wallis     Apr 14, 2013 in Internet
Sydney - The anti-free will nuts are at it again. The “new” theory is that all the software that inflicts associated links on you based on your browsing proves that your views are shaped by algorithms. God is a tracking cookie, apparently.
Quite aside from the pathos of people trying to prove themselves right with self-supporting logic, here is how the Sydney Morning Herald describes the algorithms, in an article called “Distorted world view: how computers are doing our thinking for us”:
These algorithms, written into application source code by their designers, start by looking at ''signals'' - such as location, past click behaviour and search history - before deciding how to present information they calculate we will want to consume.
In some cases - such as Facebook's ever-changing news feed - only the content it sees as most engaging to us from brands and friends is shown in its top stories feed, excluding most of the other data people produce.
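The kind of signal-based filtering the quoted passage describes can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual code; the field names, weights, and `rank_feed` function are all made up:

```python
# Hypothetical sketch of signal-based feed filtering, as the quoted
# article describes it: score each item by past click behaviour from
# that source, show only the top stories, and silently drop the rest.

def rank_feed(items, click_history, top_n=10):
    """Order feed items by how often the user clicked that source before."""
    def score(item):
        # The "signal": number of past clicks on this item's source.
        return click_history.get(item["source"], 0)
    ranked = sorted(items, key=score, reverse=True)
    return ranked[:top_n]  # everything below the cut is never shown

feed = [
    {"source": "liberal_friend", "title": "Story A"},
    {"source": "conservative_friend", "title": "Story B"},
    {"source": "brand", "title": "Ad C"},
]
clicks = {"liberal_friend": 12, "conservative_friend": 1}

print([item["title"] for item in rank_feed(feed, clicks, top_n=2)])
```

Note what the sketch makes obvious: the cut-off is applied without consulting the user, which is exactly the complaint Pariser makes in the quote below.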
The article cites somebody called Eli Pariser, who wrote a book on internet filter bubbles arguing that how we receive filtered information affects our views.
Pariser wrote that when algorithms did all the legwork for them, people were less often exposed to conflicting viewpoints and became intellectually trapped in their own information bubble.
In one example of search filtering, two people searched for “Egypt”. One got travel sites; the other got news about the uprising.
The article continues:
''What it turned out was going on was that Facebook was looking at which links I clicked on, and it was noticing that, actually, I was clicking more on my liberal friends' links than on my conservative friends' links,'' he says. ''And without consulting me about it, it had edited them out.''
In other words, useless filtering produces useless results and in this case removes about half of the potential for getting useful information. There’s a surprise.
Now apply this principle as a too-easy way of devaluing people’s viewpoints. Any hack could use this methodology as a way of attacking the validity of any viewpoint, simply by denigrating your source of information.
Algorithms, no free will and lousy logic
This plugs nicely into the “no free will” garbage. The anti-free will psychosis starts with the assumption of inferior knowledge. It instantly devalues any suggestion that people have free will, thereby devaluing their viewpoints. The no free will cult is now an almost mystic force online, with strangely smug professorial nobodies getting spiritual about the absence of free will, predeterminism, and the rest of the psycho-misanthropic travel brochure being “built in to the fabric of the universe”.
(I refuse to post even a link to this sophist tripe. Just search “free will” online and you’ll find tons of it, particularly on YouTube. This is an article I did on free will a while back.)
So now, purely coincidentally of course, there are algorithms which can instantly devalue not just your viewpoint, but the actual quality of information you receive? George Orwell wouldn’t have been surprised, either. This is a typical doublethink psychological stratagem, if you’re a really crappy, ideas-free, six-legged psychologist.
It works on uneducated people and the sort of credulous people who “believe” whatever they’re told everyone else believes. It assumes disbelief is suspended, which it rarely is. It doesn’t work on freethinkers or those with a basic understanding of logic.
Consider for a moment the quaint, womb-like idea of no free will. Now consider the fact that people saying there is no such thing as free will are presumably saying so because they themselves have no free will. The mere fact that they have no free will, by their own argument, devalues/invalidates their argument.
If they lack free will, they can’t argue otherwise. (As if the banal, infantile content of these “ideas” already hadn’t proven this quite adequately.) A person with no free will can’t, by definition, assess alternatives to their own position. They start from a premise which won’t allow it.
Now consider where the no free will argument leads. Absolutely nowhere. If everything is predetermined, there is no randomness. Even mistakes are predetermined. Da Vinci was predetermined. Van Gogh was predetermined. Shakespeare was predetermined. There was a mystic script for all their works.
Sure, there was. That’s why that standard of art and literature is so common. Everybody is Da Vinci. Everybody is Van Gogh. Everybody is Shakespeare. The entire human race is just one big misunderstood, pre-programmed genius with no ideas.
Algorithms can do all this? They can reduce human thinking to mere demographic categories? No, they damn well can’t. Try finding a person on Earth who isn’t irritated by irrelevant search results. The free will response to most searches is annoyance, and the response to all those subtle ads is distrust.
Can people without free will get annoyed? Is distrust predetermined by presumably predetermined algorithms?
The old description of the no free will argument was “fatalism”, aka the will of God, destiny, etc. Fatalism had one fatal flaw, also presumably pre-programmed: like its idiot and equally lazy, useless cousin, nihilism, it assumes an inferior status of human existence. The human is subject to forces which it can’t control. The tendency of humans to avoid situations like that is scrupulously ignored. Apparently there’s no survival instinct, either. The mere fact that humans alter their environments at will is ongoing proof of the ability to manipulate fate, but why should millions of years of history get in the way of lucrative bullshit?
In other words, no free will is rote religion, by stealth, peddled by “scientists” who apparently don’t even check their own views for bubble-like insularity. The argument that algorithms dictate human life is like saying your toaster is sending you subliminal messages.
Yes, there’s a problem with algorithms, but it’s a bit more basic than predetermined dreck on your screen. The problem is lousy search results. All this effort goes into producing results which are barely if at all related to the things you were actually searching for, and this is a global bubble, withholding information?
As a matter of fact, useless search results are the result of lazy search engines as much as any drab little exercise in filtering. The irony is that in nearly 20 years of searching for many different subjects, I’ve noticed that the overall quality of searches has gone down, badly. All this tinkering has simply blurred the results.
The best quality searches were and continue to be Boolean. They’re specific. The irony is that the search engines, no doubt fully aware of this fact, are prepared to give you any number of million results when you could in fact find what you want with a few hundred and related links. Good use of resources, or clogging up the internet with useless/never to be used data?
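The contrast the author draws between Boolean and “helpful” ranked search can be sketched directly. This is a toy illustration with made-up documents; `boolean_search` and `loose_search` are hypothetical names, not any real engine's API:

```python
# Hypothetical contrast between a strict Boolean query and a loose
# "relevance" search. The Boolean version returns only documents that
# match every term; the loose version returns anything sharing any term.

docs = [
    "egypt travel packages and nile cruises",
    "egypt uprising news and protests",
    "cheap travel deals worldwide",
]

def boolean_search(docs, must_have):
    """Return only documents containing ALL query terms (AND semantics)."""
    return [d for d in docs if all(t in d for t in must_have)]

def loose_search(docs, terms):
    """Return every document containing ANY term, ranked by term count."""
    scored = [(sum(t in d for t in terms), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

print(boolean_search(docs, ["egypt", "uprising"]))  # one precise hit
print(loose_search(docs, ["egypt", "travel"]))      # every fuzzy hit
```

The Boolean query narrows three documents to one; the loose query hands back all three and leaves the reader to sort through them, which is the “millions of results” problem scaled down.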
Algorithms be damned. Inferior quality search results, more like, are based on a crappy market psychology. This psychology, which apparently assumes that a passing interest or mention of something is a natural driving priority in human thinking, apparently doesn’t go so far as to assume any other elements in that thinking.
You see a picture. You click on the picture. Then you get deluged with results related to this picture, which has been upgraded to an obsession by a mathematical formula? Drivel.
Even the functionality of searches is subject to this argument. I’m an Australian. If I hit Google News, I get Australian results at the top. This is quite regardless of the fact that I can access all the information I need in Australia and have usually already read most of the news locally.
The search version of this lousy algorithmic argument also overlooks other elements in the equation: tracking cookies, plagues of them, which literally shape your online environment and add their tacky irrelevance to your life. These cookies and the information they provide, by definition, can change the profile of the user. User preferences further modify the filter process.
Therefore, by this logic if you click on a Victoria’s Secret picture, you’re obsessed with women’s clothing and news about celebrity supermodels. A stray ad with a picture of a pretty girl means you want to go to Thailand. You’re bombarded with images of Thailand, based on strict algorithmic protocols, utterly uselessly.
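Taken literally, the caricature being mocked here amounts to a one-click-equals-obsession profiler. The sketch below is entirely hypothetical; the function names, topic labels, and ad inventory are invented for illustration:

```python
# Hypothetical caricature of cookie-driven profiling: a single stray
# click is promoted to an "obsession" that then dictates what is shown.

def update_profile(profile, clicked_topic):
    """Bump the clicked topic's weight in the user profile."""
    profile[clicked_topic] = profile.get(clicked_topic, 0) + 1
    return profile

def pick_ads(profile, inventory):
    """Serve only ads matching the single highest-weighted topic."""
    if not profile:
        return []
    top = max(profile, key=profile.get)
    return [ad for ad in inventory if ad["topic"] == top]

profile = {}
update_profile(profile, "thailand")  # one stray click on a pretty picture
ads = pick_ads(profile, [
    {"topic": "thailand", "text": "Visit Thailand!"},
    {"topic": "news", "text": "Today's headlines"},
])
print([a["text"] for a in ads])
```

One click and the entire inventory collapses to Thailand, “based on strict algorithmic protocols, utterly uselessly”.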
No free will, eh? Everything about user behaviour is predictable, you say? Doesn’t sound much like our troll-ridden, malware-saturated, online nuthouse to me. All sites and viewpoints are dictated by algorithms, are they? Mine, which is saturated with my perspectives, isn’t. I don’t use or need tracking cookies. The people I want to reach wouldn’t appreciate yet another cookie in their cache, either.
The only place filters really work well is on shopping sites, where the filters are based on indexes. Hit a product, you get choices. The site remembers your interests and stores them, which saves you doing more searching. That makes sense, doesn’t it?
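The index-based filtering the author approves of is simple enough to sketch. Product data, field names, and the `browse` function are all hypothetical:

```python
# Hypothetical index-backed product filter: products are indexed by
# category up front, so "hit a product, get choices" is a plain lookup,
# and remembered interests are just a stored list of past lookups.

from collections import defaultdict

products = [
    {"name": "running shoes", "category": "footwear"},
    {"name": "hiking boots", "category": "footwear"},
    {"name": "rain jacket", "category": "outerwear"},
]

# Build the index once: category -> list of matching products.
index = defaultdict(list)
for p in products:
    index[p["category"]].append(p)

interests = []  # the site "remembers" what you looked at

def browse(category):
    interests.append(category)      # store the interest for later visits
    return index.get(category, [])  # indexed lookup, no searching needed

print([p["name"] for p in browse("footwear")])
```

The key difference from the feed-filtering case: the lookup answers a question the user actually asked, rather than guessing at one.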
Just one more thing, idiots: who decides what product you choose? An algorithm, or your own value-based search parameters? Ever heard of SEO? If you’re going to start talking about dictatorship of viewpoints by algorithms, at least find out what you’re talking about. As for the no free will morons, go to hell. You’re late for your predetermined fate.
This opinion article was written by an independent writer. The opinions and views expressed herein are those of the author and are not necessarily intended to reflect those of DigitalJournal.com