Tech & Science

Op-Ed: AI learning to predict conversations and ‘moral’ violence

Why, exactly, would anything intelligent want to know when a conversation is heading to the bottom? Well, do people really want to know? It’s a social thing, but the big deal here is that it’s also a pretty advanced form of learning.
This type of learning involves:
1. Conversation information inputs.
2. Contexts and situational references.
3. A wide range of subject matter to be understood and interpreted.
4. Use of language in multiple forms.
5. Obscure human references.
6. Understanding the different parties in a conversation.
7. Recognizing the flashpoints, notably the initial ones.
The AI is getting pretty good. It now predicts a conversational meltdown 61% of the time, compared to a 72% average for humans. For a science that's barely out of the pure-speculation bassinet, that's a very good effort.
The result is even more important: the ability to predict. That's a big leap. Predicting conversations can be quite difficult. If you know the people, it's a bit easier, but if you're a machine intelligence which still thinks humans know what they're talking about, it's a big challenge.
Consider – There you are, a binary baby, processing the timeless wisdom of online babble, rants, and various explosions of stupidity. You can predict where it’s going? Lucky artificially intelligent you.
The Latest Research
Cornell research shows that there are solid predictors. The use of the word "you", for example, is a sort of precursor to a decaying conversation. Sounds right, doesn't it? Hard to have an argument if you don't tell the other guy it's "you" they're arguing with.
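To see why a single word can act as a signal at all, the Cornell idea can be caricatured in a few lines: measure how much of an utterance is second-person pronouns. This is a toy sketch only, with an invented function name and word list; the actual research uses far richer features than this one cue.

```python
# Toy illustration of "you" as a derailment cue. The pronoun list and
# the density measure are invented for this sketch, not the Cornell model.
SECOND_PERSON = {"you", "your", "you're", "yours", "yourself"}

def second_person_density(utterance: str) -> float:
    """Fraction of words in an utterance that are second-person pronouns."""
    words = [w.strip(".,!?\"'").lower() for w in utterance.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SECOND_PERSON)
    return hits / len(words)
```

On a pair of made-up utterances, "I think the sources support a different reading." scores 0.0, while "You clearly didn't read it, and your argument is nonsense." scores 0.2, which is the direction the finding points: more "you", more trouble.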
Meanwhile, USC has come up with a parallel line of research: deep learning studies of the risk of violence in online posts. Deep learning is the layered machine-learning method behind modern AI, and there are a few obvious dots to join in context with the Cornell research.
Here are a few of the dots:
1. “Moralized” language usage – The “endorsement” of extreme positions by groups.
2. “Echo chambers” – The amplification, affirmation and reaffirmation of whatever load of moral crap is involved.
3. The “we are many” syndrome – When people feel supported by a large group, they’re more heroic.
Add to this the conversational decay markers from the Cornell studies, and you get one truly lousy picture of humanity.
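Joining those dots could, in caricature, look like a weighted checklist: count how many decay markers and moralized-language markers show up in a post. Everything below is illustrative guesswork, including both word lists and the scoring rule; neither team's model is anything this crude.

```python
# A caricature, not either team's model. Both marker lists below are
# invented for illustration.
DECAY_MARKERS = {"you", "your", "always", "never"}        # Cornell-style decay cues
MORAL_MARKERS = {"evil", "traitor", "deserve", "enemy"}   # USC-style moralized language

def escalation_score(post: str) -> int:
    """Count how many known markers appear in a post (higher = riskier)."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return len(words & DECAY_MARKERS) + len(words & MORAL_MARKERS)
```

A bland post like "The weather is nice today" scores 0, while "You traitors always deserve it" scores 3: two decay cues plus one moralized word. Even this crude a checklist separates the two, which is roughly why the real, much subtler versions work.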
How familiar is this – Conversation, escalation, and bingo, you have a bonny bouncing society of fools who really believe they have the right to trample all over others.
In kids, this can be cured by a thoughtful whack on the backside. Online, it’s an industry in its own right. Any collection of boring failed roadkills can be “extremists”, babbling about things they know less than nothing about. They can advocate killing others, and the rest of the sheep agree, courtesy of the “we are many” syndrome. What a picture of humanity.
The pity of it is that this is what AI and deep learning are studying: a repulsive, useless set of phenomena which shouldn't happen at all. Worse, arguably, is that this very useful study, perhaps the first real systematic one, is necessary at all. Poor AI; just born, and its area of study is human idiocy.
Let’s hope it doesn’t get bored.

Written By

Editor-at-Large based in Sydney, Australia.
