Op-Ed: 'Sentiment analysis' enters the deadly maze of psych technologies

Posted Sep 17, 2020 by Paul Wallis
Algorithms are mathematical procedures which can now be used to interpret people’s moods from what they write on social media. If you’re thinking these algorithms must have had a rough ride, you’re right. It's about to get a lot rougher.
Brain’s white matter obtained from diffusion tensor imaging. Photograph: Institute of Psychiatry
The latest tech is the Hedonometer, which measures happiness or sadness. Imagine dredging through 50 million Tweets to create a reliable diagnostic tool. That’s what the Hedonometer at the University of Vermont does.
It’s called “sentiment analysis”, and it’s one of neuroscience’s new, and highly debatable, contributions to psychology. In conjunction with the development of voice print analysis to assess the mood of phone callers, this type of algorithm is likely to be extremely controversial, and possibly risky, in future.
Sentiment analysis
With sentiment analysis, your word associations are formulated as indicators of your mood. This type of analysis could have a very short shelf life in some cases. Moods on social media swing, a lot, and often. An annoyed person will use different phraseology and terminology from a calm one.
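To make the word-association idea concrete, here is a minimal sketch of lexicon-based sentiment scoring. The word scores below are invented for illustration; real tools such as the Hedonometer use crowd-sourced happiness ratings for tens of thousands of words, but the averaging principle is the same.

```python
# Hypothetical lexicon: word -> happiness score on a 1 (sad) to 9 (happy)
# scale. These values are made up for illustration only.
LEXICON = {
    "love": 8.4, "happy": 8.3, "great": 7.8,
    "ok": 5.0,
    "tired": 3.4, "sad": 2.4, "hate": 2.2,
}

def sentiment_score(text):
    """Average the lexicon scores of recognised words; None if none match."""
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else None

print(sentiment_score("i love this great day"))   # high average -> "positive"
print(sentiment_score("so tired and sad today"))  # low average -> "negative"
```

Note what the sketch also exposes: a message full of unknown words scores nothing at all, and a single sarcastic "great" is counted as genuine happiness, which is exactly the fragility discussed above.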
However – analyses of tweets from people diagnosed with depression did show some common elements, such as a high frequency of negative emotions. It’s not at all unreasonable that common factors affecting people with depression would be visible in their communications.
Whether or not this constitutes an accurate analytical tool in other cases remains to be proved. People do use facile terms for all types of situations, from sales to making excuses. What you say or write is what you want people to hear or see.
This type of analysis can’t be infallible. A level of consistency would be required to positively identify an overall state of emotional distress or euphoria. A passing mood or temper tantrum could make data inconsistent.
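The consistency requirement can itself be sketched in code. The following is an illustrative approach (not any vendor's actual method): only flag sustained low mood when scores stay below a threshold for a whole consecutive window, so a single bad day doesn't trigger a false positive. The threshold and window length are assumptions chosen for the example.

```python
def flag_persistent_low_mood(daily_scores, threshold=4.0, window=7):
    """Return True only if every score in some consecutive `window`-day
    stretch falls below `threshold` (on the same 1-9 happiness scale)."""
    if len(daily_scores) < window:
        return False
    for i in range(len(daily_scores) - window + 1):
        if all(s < threshold for s in daily_scores[i:i + window]):
            return True
    return False

one_bad_day = [6.1, 6.0, 2.5, 6.2, 5.9, 6.0, 6.1, 6.3]
sustained   = [3.1, 2.9, 3.5, 3.0, 2.8, 3.2, 3.4]

print(flag_persistent_low_mood(one_bad_day))  # False: a passing mood
print(flag_persistent_low_mood(sustained))    # True: a consistent pattern
```

Even this crude filter separates a temper tantrum from a sustained pattern; the open question is whether commercial tools apply anything like this discipline.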
Voiceprint mood analysis
Less impressive by a long way is voiceprint mood analysis, which is being used as a type of risk assessment in call centres. This rather intrusive approach does have some upside: it’s the phone listening to you, not the person on the other end. The analysis can pick up variations in tone and run a real-time analysis of the state of mind of the speakers. A company called Cogito offers software called Dialog for customer service agents which does this on a commercial basis.
The software can be used in many settings, even in just knowing when people are paying attention. To their credit, the developers of Dialog acknowledge that this approach could be invasive of privacy, and is better suited to a more public, transparent, environment.
It’s not clear whether conditions like sinusitis or laryngitis, things like drinking coffee, or other basic qualifiers are factored in. The voiceprint software is also characterized as a “risk assessment” tool. This might mean a diagnostic warning light for a medical professional, or an alarm button for a customer service agent facing a tough customer.
One of the more interesting things to come out of sentiment analysis is the analysis of songs. A 2017 study by researchers Artemy Kolchinsky, Nakul Dhande, Kengjeun Park, and Yong-Yeol Ahn, published in Royal Society Open Science, found direct indicators of sentiment: higher levels of positive sentiment in major chords and lower levels in minor chords.
They also found drastic differences in sentiments for types of music from 1960s rock (very high positives) to punk and metal (very low). Lyrical sentiments in the different types of music gave similar results.
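A toy sketch shows how a chord-level finding like this could be turned into a score for a whole progression. The +1/-1 valences are invented for illustration and are not the study's actual values; real analyses work from the lyrics co-occurring with each chord.

```python
def chord_valence(chord):
    """Return +1 for a major chord, -1 for a minor chord.
    Assumes simple names like 'C' (major) and 'Am' (minor)."""
    return -1 if chord.endswith("m") else 1

def progression_sentiment(chords):
    """Mean chord valence for a progression, in the range [-1, 1]."""
    return sum(chord_valence(c) for c in chords) / len(chords)

print(progression_sentiment(["C", "G", "Am", "F"]))     # mostly major -> positive
print(progression_sentiment(["Am", "Dm", "Em", "Am"]))  # all minor -> negative
```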
Most people consider their emotions private. Any degree of intrusion can be highly resented, and set off evasive maneuvers in subjects. This type of analysis is new and very different. It needs time for heavy-duty field testing and objective criticism.
These types of analyses are also going to push a lot of hyper-sensitive buttons. Even the motives of some types of mood analysis are very questionable. (Analysis can be for gain, not ethical reasons. The information could also have the same legal issues as any kind of personal medical data.)
The good news, such as it is, is that the analytic methods can be used to interpret employee satisfaction (used by IBM) and similar positives. Responsible employers would benefit from this data, assuming it to be reliable and pretty bulletproof as an analytical tool.
The other side of this type of analysis is the Wild West of psychology – Many American psychologists have expressed encyclopedias of doubt about diagnostic standards over the years. As many or more psychiatrists have expressed total disdain and healthy distrust for the “everything is a psychiatric condition that needs meds” motif in their field. This tech has barely got off the drawing board, and it does need credible standards to be set.
Note: The link above is to one of my own articles on the subject of diagnostic controversies back in 2013. Very annoyingly to me and quite probably most of the psychiatric profession, not one single issue in it has been addressed. This idiocy is exactly what sentiment analysis is walking into right now.
This line of research is controversial because of real doubts and, to be fair, a commendable requirement for stringency on the part of researchers. Whether or not everyone who uses it will be so ethical, however, is more than debatable – and that question underpins the value of the technology and the whole idea of this type of analysis.
The bottom line for sentiment analysis is looking murderously immovable - Can an algorithm really conduct a 100% accurate diagnosis? Can an El Cheapo version of the same technology be trusted? Can someone be placed at risk by an inaccurate or sloppy diagnosis? These questions can’t be avoided for long.
Laws and analyses
It’s a very old California joke that you need more qualifications to be a hairdresser than a psych. The problem is that some people seem to go out of their way to prove that joke right.
The risk is that the tech could backfire, badly, in multiple ways. Plagiarized or “cost-effective” software on the market could become very grim indeed. All it needs are a few idiots. A dud diagnosis could do real damage to the subject of the analysis, in fact, legally actionable damage, and a lot of it.
In medical law, negligence is proven by counterevidence from a qualified analyst. A halfwit spruiking half-baked sentiment analysis or voice analysis could sell you a major liability worth millions of dollars to the claimant(s) against you.
That, sadly, is all too likely. Most tech comes out in its original form and is then devalued by cheap copies or worse. Like stem cells, the discovery is important, but the applications could be anything.
Privacy and degrees of intrusion
I definitely don’t want to start valiantly waving the privacy flag at this point. It’s far too early in the game for this tech. There aren’t enough clearly defined real-world situations to evaluate it. Any number of scenarios could arise where the information is either something you want kept private, or something another party legitimately needs to know.
For example – In business negotiations, a lot of money may be involved. The use of sentiment analysis from recordings of negotiations could be grounds for legal disputes. The claimant could say they were disadvantaged in negotiations by its use. That’s not so far-fetched. Psychology is very much part of negotiations, particularly tough negotiations.
Whether or not a court believes a word of it is another thing, but you can see where things like divorces, workplace disputes, etc. could also be part of this mix. This is tricky, and it’s going to get a lot trickier.
I’ll suspend judgment on the tech pending some more solid evidence for or against. What I will say is that this tech needs to be very sure of its ground in practical and legal terms, for everyone’s sake.