Cornell Computer Is Better at Spotting Fake Hotel Reviews Than You Are
It all comes down to this concept called “truth bias.” Basically, when you read something, you generally take it as truth until you find evidence to the contrary (makes my job easier). On the flip side, if you’re told to be on the lookout for deception, you start shadowboxing like a schizophrenic and won’t believe your own mother’s story about how fluffy the pillows were. Enter the zen quietude of the robot brain.
The program that sorts through these reviews has none of our psychological baggage, of course, and instead focuses on some odd but interesting differences between real reviews and fake ones. For instance, real reviews tend to use more concrete nouns, while fake ones lean heavily on verbs. The liars also do more scene-setting and talk about “vacation” or “my business trip,” while the truthful among us refer to boring, real things like “the bathroom” and “the lobby.” Basically, liars tend to write in a flowery, scenic way, while the truthful write like Hemingway. These differences are subtle, however, and given the whole truth bias thing and how closely you have to look to get this stuff right, the computers are better at it than you ever will be.
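To make the idea concrete: here's a toy sketch of the kind of scoring described above. This is purely illustrative, not the Cornell system (which trained a real machine-learning classifier over many text features); the word lists and the `deception_score` function are invented for this example. It just shows how scene-setting vocabulary could push a review toward "fake" while concrete hotel nouns push it toward "truthful."

```python
# Toy illustration of the signals described above -- NOT the actual
# Cornell classifier. The word lists here are invented for this sketch.

CONCRETE_NOUNS = {"bathroom", "lobby", "bed", "shower", "elevator", "desk"}
SCENE_SETTING = {"vacation", "trip", "husband", "business", "experience"}

def deception_score(review: str) -> float:
    """Return a score in [-1, 1]: positive leans fake, negative leans truthful."""
    words = [w.strip(".,!?").lower() for w in review.split()]
    fake_hits = sum(w in SCENE_SETTING for w in words)   # scene-setting words
    true_hits = sum(w in CONCRETE_NOUNS for w in words)  # concrete hotel nouns
    total = fake_hits + true_hits
    if total == 0:
        return 0.0  # no signal either way
    return (fake_hits - true_hits) / total

print(deception_score("My husband and I had a wonderful vacation experience!"))  # leans fake
print(deception_score("The bathroom was clean and the lobby had a nice desk."))  # leans truthful
```

A real system would use far richer features (part-of-speech ratios, n-grams) and learn the weights from labeled data rather than hand-picking word lists, which is exactly why the researchers' classifier beats human readers.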
The kicker? These algorithms, while awesome, have only been validated for reviews, and to narrow the scope even further, only for hotel reviews. Still, there are applications to be had in first-line online review screening, and a new word bank and some fine-tuning could, presumably, open the program up to spotting fake online reviews for other things. Just beware: when they finally rise up, the robots will know if you are lying, so stop pitting your Roombas against each other, because I highly doubt our robot overlords will approve.
(Chronicle Online via Reddit)
Have a tip we should know? tips@themarysue.com