Op-Ed: AI learning to predict conversations and ‘moral’ violence

By Paul Wallis     Jun 1, 2018 in Technology
Sydney - AI, the ancient holy grail of thinkers and the brand new buzzword of morons, is going from strength to strength. The new thing is AI learning to predict when a conversation is likely to get nasty or when violence is a real risk.
Why, exactly, would anything intelligent want to know when a conversation is heading to the bottom? Well, do people really want to know? It’s a social thing, but the big deal here is that it’s also a pretty advanced form of learning.
This type of learning involves:
1. Conversation information inputs.
2. Contexts and situational references.
3. A wide range of subject matter to be understood and interpreted.
4. Use of language in multiple forms.
5. Obscure human references.
6. Understanding the different parties in a conversation.
7. Recognizing the flashpoints, notably the initial flashpoints.
The AI is getting pretty good. It’s up to 61%, compared with humans’ 72% average at predicting a meltdown in a conversation. For a science that’s barely out of the pure scientific speculation bassinet, that’s a very good effort.
The result is even more important – the ability to predict. That’s a big leap. Predicting conversations can be quite difficult. If you know the people, it’s a bit easier, but if you’re a machine intelligence that still thinks humans know what they’re talking about, it’s a big challenge.
Consider – There you are, a binary baby, processing the timeless wisdom of online babble, rants, and various explosions of stupidity. You can predict where it’s going? Lucky artificially intelligent you.
The Latest Research
Cornell research shows that there are solid predictors. The use of the word “you”, for example, is a sort of precursor to a decaying conversation. Sounds right, doesn’t it? Hard to have an argument if you don’t tell the other guy it’s “you” they’re arguing with.
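To make the idea concrete, here’s a toy heuristic in the same spirit – a hypothetical sketch, not the Cornell researchers’ actual model, with an invented pronoun list and threshold – that flags second-person-heavy messages as possible precursors to derailment:

```python
import re

# Invented marker set for illustration only -- not the real Cornell features.
SECOND_PERSON = {"you", "your", "yours", "you're", "u"}

def second_person_rate(message: str) -> float:
    """Fraction of words in a message that are second-person pronouns."""
    words = re.findall(r"[a-z']+", message.lower())
    if not words:
        return 0.0
    return sum(w in SECOND_PERSON for w in words) / len(words)

def looks_risky(message: str, threshold: float = 0.1) -> bool:
    """Flag a message as a possible derailment precursor (toy threshold)."""
    return second_person_rate(message) > threshold

print(looks_risky("You always do this, and your tone proves it."))
print(looks_risky("Let's look at the data together."))
```

The first message is pronoun-heavy accusation and trips the flag; the second doesn’t. The real research, of course, uses far richer linguistic cues than a single word count.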
Meanwhile, USC has come up with a parallel line of research: deep learning studies of the risks of violence in online posts. Deep learning is one of the main methods AI uses to learn, and there are a few obvious dots to join in context with the Cornell research.
Here are a few of the dots:
1. “Moralized” language usage – The “endorsement” of extreme positions by groups.
2. “Echo chambers” – The amplification, affirmation and reaffirmation of whatever load of moral crap is involved.
3. The “we are many” syndrome – When people feel supported by a large group, they’re more heroic.
Add to this the conversational decay markers from the Cornell studies, and you get one truly lousy picture of humanity.
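Joining those dots mechanically might look something like the sketch below. The marker lexicons here are invented for illustration – they are not the USC or Cornell feature sets, and the real work uses deep learning, not word lists:

```python
import re

# Hypothetical marker lexicons -- invented stand-ins for the three dots above.
MORALIZED = {"evil", "corrupt", "traitor", "disgusting"}   # "moralized" language
AFFIRMATION = {"agreed", "exactly", "+1"}                  # echo-chamber affirmation
GROUP = {"we", "us", "everyone"}                           # "we are many" cues
MARKERS = MORALIZED | AFFIRMATION | GROUP

def escalation_score(posts: list[str]) -> float:
    """Crude 0-to-1 score: the share of posts containing any marker word.
    A toy illustration of combining the markers, not a deep learning model."""
    def has_marker(post: str) -> bool:
        words = set(re.findall(r"[a-z0-9+']+", post.lower()))
        return bool(words & MARKERS)
    if not posts:
        return 0.0
    return sum(has_marker(p) for p in posts) / len(posts)

thread = ["They are corrupt and evil.", "Agreed, we should act.", "Nice weather today."]
print(escalation_score(thread))
```

Two of the three posts hit a marker, so the thread scores high – the kind of signal that, scaled up with actual deep learning, paints that truly lousy picture of humanity.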
How familiar is this – Conversation, escalation, and bingo, you have a bonny bouncing society of fools who really believe they have the right to trample all over others.
In kids, this can be cured by a thoughtful whack on the backside. Online, it’s an industry in its own right. Any collection of boring failed roadkills can be “extremists”, babbling about things they know less than nothing about. They can advocate killing others, and the rest of the sheep agree, courtesy of the “we are many” syndrome. What a picture of humanity.
The pity of it is that this is what AI and deep learning are studying – a repulsive, useless set of phenomena which shouldn’t happen at all. Worse, arguably, is that this very useful study – perhaps the first real systematic one – is necessary at all. Poor AI; just born, and the area of study is human idiocy.
Let’s hope it doesn’t get bored.
This opinion article was written by an independent writer. The opinions and views expressed herein are those of the author and are not necessarily intended to reflect those of DigitalJournal.com