
Op-Ed: A.I. writes Yelp! reviews, complete with OMGs, etc.

By Paul Wallis     Sep 2, 2017 in Technology
Chicago - Artificial intelligence (A.I.) writing has come a long way in the last few years: from “impossible” to industry standard (not much better), and now to useful Yelp! reviews. The reviews certainly do look authentic.
The University of Chicago, in a burst of what some might see as cynicism, or realism, decided to find out whether A.I. could write fake reviews. It could, and it could do it well.
The type of A.I. used is what’s called a neural network. Expect to be saturated with that expression soon enough. A neural network is a computing system loosely modelled on the human brain, capable of “deep learning” (massive number crunching combined with learning capabilities). This is the new A.I. generation, and it’s achieving quite a lot of useful things. There’s a gigantic array of writing software on the market, and it’s obviously growing up, fast.
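To give a feel for how a machine can learn to write from examples, here is a deliberately simplified sketch: a character-level Markov chain rather than a neural network, trained on a few review-like sentences I made up for illustration. The real University of Chicago system was far more sophisticated, but the core idea — learn which characters tend to follow which contexts, then sample — is the same family of trick.

```python
import random
from collections import defaultdict

def train_char_model(text, order=4):
    """Map each length-`order` context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=80):
    """Extend `seed` one character at a time by sampling from the model."""
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += random.choice(choices)
    return out

# Illustrative "training data" -- invented, not from the study.
reviews = ("The food is great and the staff is super nice. "
           "The chicken is very good and the garlic sauce is perfect. ") * 3
model = train_char_model(reviews)
print(generate(model, "The "))
```

With so little training text the output mostly parrots the input; the point is only that fluent-looking text can fall out of pattern statistics, with no understanding of food, staff, or garlic involved.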
The reviews written by the University of Chicago’s A.I. were pretty much standard restaurant reviews: short and structured. Many readers rated the reviews as useful, another telling blow for A.I.
The trick to looking real was simple enough. OMGs, oohs and aahs were added strategically to what is basically a structured narrative.
If you analyse the reviews written by the A.I., you can see how it works:
"My family and I (Who’s reviewing) are huge fans of this place (subject reference). The staff (scene setting) is super nice (scene setting )and the food is great (unqualified positive descriptor). The chicken (subject) is very good (value) and the garlic sauce (subject) is perfect (unqualified positive descriptor). Ice cream (subject) topped with fruit (descriptor) is delicious (unqualified positive descriptor) too. Highly recommended! (value) "
This is actually quite good, authentic baseline structuring of both sentences and subject matter. It’s a very nice bit of organization, despite seeming simple. It’s designed to mimic a quasi-literate reviewer, and is actually better than some I’ve seen.
It’s not the genuine article in terms of the restaurant review cycle:
1. A genuinely enthusiastic foodie would write a sort of novel on the subject of the setting, followed by painstaking reviews of each dish, and a highly targeted end summation.
2. A bored food journalist would use a macro or template built from previous articles, take a census of good and bad points, and possibly wake up for the summation. Enough effort would be put into making the review unique to avoid the ire of editors and readers.
In fairness to A.I. writing, I must say that what I’ve seen, including sports news and other types of “conveyor belt” writing, is pretty adequate on all the basic content needs. It’s not bad writing at all, and it’s never glaringly lacking in obvious content needs.
The new effort by the University of Chicago is designed to deliver what A.I. writing typically lacks, which is color. A.I. writing in the past wasn’t so much dull as undistinguished. A few superlatives go a long way, and words like “delicious” trigger psychological reactions.
Add to this the fact that a good review is a good review, and a 24/7 reviewer is born… Maybe.
I’m a pro writer with over 10 million words online, all paid for by other people. I am NOT going to criticize A.I. writing simply because it’s A.I. writing. That’s not only banal, but largely beside the point in terms of practical writing. It’s also unfair, because the A.I. writing I’ve seen is OK by most reasonable standards.
Writing is functional, as well as whatever else it may be. What is written is written for a reason, and the idea is to communicate information. That’s the bottom line, and it’s the must-do part of writing. A.I. does it well enough to pass for reportage.
A.I. has shown that it can do the basic content writing well, establishing subject, information and even commentary pretty effectively. It IS commercially viable, in terms of core information processing and handling. I’d go so far as to say that it can obviously correlate a lot of relevant information, more than human writers would normally use.
The problem I see is appealing to human motivation and extended logic. Human writers use blue sky references and extended logic more than they realize. This is “engagement” in the most literal sense, and it’s bread and butter for everyone who publishes anything.
For example, this is how I’d write that review:
"We were a large party of foodies. (Who’s reviewing) We were curious about this place (subject reference) because of the eclectic menus and interesting takes on standard dishes. The staff at this very When Harry Met Sally restaurant (scene setting) were very professional, and very busy despite being also very friendly, rather than the other way round.( qualified, humanized, scene setting ). The food was beautifully presented (unqualified positive descriptor) and excellent. The chicken in particular(subject) is very tasty and lively (value) and the garlic sauce (subject) was a dream of tang and vivacity. (unqualified positive descriptor). The dessert included a truly fabulous, ice cream (subject) dazzlingly bombarded with fruit (descriptor) and was a real triumph of good taste, if you’ll excuse the expression. (unqualified positive descriptor). A Must-Eat-Here restaurant, not to be missed! (value)"
Note the hype, but note also the references to things which aren’t food and aren’t restaurants. This is part of what I do for a living, and I can tell you for a fact that anything that says no more than “Restaurant Good” isn’t top of the line hard sell. It’s OK, but not Ferrari standard.
The A.I. review would appeal to local diners because it’s a good review about a local restaurant. My version targets foodies, a specific market segment, and is designed to get the high-end market.
I’m certainly not going to criticize the A.I. for getting the content structure right, either, which many human writers just don’t. Keep it simple, keep it moving and get to your summation is exactly what’s required.
No, I'm not going to get into the ethics of fake reviews. When human writers write paid fake reviews, why accuse A.I. of lacking ethics? My advice to writers would be: don’t assume that A.I. can’t do 90% of your basic job. It can do the drudge stuff, and do it well. It can add commentary, authentic or fake, and bring it off to LOOK like a real review or whatever.
I think a healthier mix of human and A.I. writing would be to let the A.I. do the basics, and then bring in the human writer to prioritise, extrapolate, and develop themes. This would save writers from a lot of drab little exercises in “The cat sat on the mat” level writing, and allow them to get on with real writing.
This opinion article was written by an independent writer. The opinions and views expressed herein are those of the author and are not necessarily intended to reflect those of this publication.