Do bots help to spread
fake news?
No, but they could be part of the answer
"A lie can travel halfway around the world while the truth is still putting on its shoes," wrote Mark Twain – and never has a maxim seemed truer.
Indeed, last year, a study examining the spread of true and false news online suggested that falsehood spreads further and faster than truth across every topic, and that the effect was magnified when it came to political stories.
Against prevailing wisdom, it also found that this was mostly down to humans – not the automated "bots" that many believed were largely responsible for disseminating the material.
And this makes sense: take that quote from Mark Twain. It seems plausible. It has been quoted in countless news stories on the subject of fake news. But there is almost no evidence that Twain actually said it.
Clearly, combatting fake news is a huge task. And if bots aren't responsible for spreading fake news, could automated systems actually form part of the solution, helping to identify false stories and check their spread?
It's a question being investigated by several groups at 91ÌÒÉ«, straddling a range of different disciplines.
The first task is to come up with a working definition of fake news, which means making an important distinction between misinformation – a neutral term for material that is untrue – and disinformation, a subset that is deliberately intended to mislead people.
Unless you understand how people consume information and learn, you're not going to be able to have anything more than a conversation in which you're shouted down or dismissed.
It's recognised that people may create and spread disinformation for a wide variety of reasons.
"In some cases, it's just to sell more advertising," says Dr Julio Amador Diaz Lopez, a 91ÌÒÉ« Associate at 91ÌÒÉ« Business Analytics – a centre that connects the Data Science Institute and the Business School – with an interest in misinformation.
"There was a news story about some people in Macedonia who were spreading sensational stories to get clicks through their Facebook stories and earn money.
They didn't have an interest in modifying public opinion. But, on the other hand, there are state actors who are trying to generate influence and swing elections."
For Dr Mark Thomas Kennedy, Associate Professor at 91ÌÒÉ« College Business School, getting a handle on the phenomenon means understanding people who are "differently informed" and receptive to dubious information from online sources, favouring it over evidence-driven orthodoxy.
"Sometimes it's about people trying to deceive us, but other times it is people genuinely believing what they are saying and finding others who believe it with them," says Kennedy.
"And rather than trying to understand those who, for example, go in for the idea that vaccines are dangerous, there's a tendency for people like me to say, 'Well, they're just ignorant.'
"However, from a scientific perspective it's more useful to say, 'They're differently informed', than to ask, 'How can we prove to them that they're wrong?'
"We need to find out about the gaps in their background that lead them to these conclusions. Unless you understand how they consume information and learn, you're not going to be able to have anything more than a conversation in which you're shouted down or dismissed."
This was well illustrated in 2016 when Facebook began to include warning icons next to stories that had been disputed by third-party fact-checking websites.
It stopped doing so a year later, after research revealed that the red flags were causing readers to become further entrenched in their beliefs.
"People just got angry and tried to rationalise why Facebook was lying to them, rather than saying, 'OK, this is a piece of information that's not real'," says Amador.
Kennedy stresses the importance of a cross-disciplinary approach to the problem – bringing in expertise from the spheres of the social sciences and philosophy of science, as is taking place at 91ÌÒÉ«.
'Language leakages' have nothing to do with the message but could signal if something is suspicious
"Facebook had been approaching this with the idea of 'Oh well, we'll just figure out what fake news is and isn't'," he says.
"They're perhaps an example of very smart computer scientists who were themselves differently informed, in that they weren't so good at realising that these things are not always black and white."
When done by humans, identifying fake news and working out how it spreads can be hugely labour intensive. Harnessing machine learning and artificial intelligence to help is a holy grail for both academics and the social-media platforms.
In one measure of this, Twitter recently acquired Fabula AI, a startup helmed by Michael Bronstein, Chair in Machine Learning and Pattern Recognition at 91ÌÒÉ«, which uses a new class of algorithms to detect misinformation.
Some of the most promising approaches start with a premise that may seem surprising: that AI systems don't necessarily need to decode or understand a piece of information to work out whether it's true or false.
Instead of looking at the "what" – the content of the tweet or post – it may be more profitable to examine the "who" (the person spreading it) and the "how" (the way it propagates).
Miguel Molina-Solana is a Marie Curie 91ÌÒÉ« Fellow at the Data Science Institute, who specialises in data mining and knowledge representation.
He says: "Analysing the content of tweets is very, very difficult. You need to do things such as identifying when someone is just being ironic or is really convinced of the facts – or whether something is simply a typo, or if it's an intended error to mislead you.
"After some thinking, what we decided was, rather than analysing the text, why don't we analyse features around the tweet?
How many followers do you have, how many capital letters are you using, and how many emojis are you putting on the tweet?
All these things have nothing to do with the actual message, but there could be signals there for identifying or at least giving a hint if something is suspicious."
Amador, who has been leading this strand of research, says: "These 'language leakages' can be detected by a computer but not by a human. And machine learning is also very good at looking at how people are diffusing the information."
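The metadata signals Molina-Solana describes can be made concrete with a short sketch. Everything here is illustrative: the feature set, the field names and the sample tweet are assumptions for demonstration, not the group's actual pipeline. The point is simply that a system can score a post without ever reading its meaning.

```python
import unicodedata

def extract_features(tweet: dict) -> dict:
    """Compute content-free features for one tweet.

    The tweet dict shape (`text`, `user_followers`) and the feature
    choices are illustrative assumptions; real models would use many
    more signals around the tweet, none of them its meaning.
    """
    text = tweet["text"]
    letters = [c for c in text if c.isalpha()]
    return {
        # Account-level signal: how many people follow the author.
        "followers": tweet["user_followers"],
        # Share of letters that are capitalised (shouting is a weak tell).
        "caps_ratio": sum(c.isupper() for c in letters) / max(len(letters), 1),
        # Count of emoji-like symbols (Unicode category 'So' covers most emoji).
        "emoji_count": sum(unicodedata.category(c) == "So" for c in text),
        # Exclamation marks: another stylistic signal unrelated to content.
        "exclamations": text.count("!"),
    }

sample = {"text": "BREAKING!!! You WON'T believe this 😱", "user_followers": 42}
features = extract_features(sample)
```

In practice, vectors like this would be fed to a trained classifier; the sketch only shows the "features around the tweet" idea, not the model itself.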
If, as the MIT study suggests, misinformation spreads in a different way from accurate material – rather as cancer cells can be distinguished from healthy ones by the way they divide – then this "signature" could be the key to finding and stopping it.
We have to look at how our interactions on these new platforms are revealing new aspects to the way we construct our social world
Machine-learning researchers at Michael Bronstein's group at 91ÌÒÉ« have been looking at a technique called "geometric deep learning", which is capable of dealing with the messy datasets generated by social media to achieve this.
They showed that fake stories (as determined by professional fact-checkers) could be differentiated with high accuracy from true ones after just a few hours of diffusion on Twitter, by learning their spreading patterns.
An advantage of this approach is that it is language-independent: it's possible to apply the same techniques to stories in English, Russian or Chinese.
It can even offer judgments on the provenance of a piece of information where its content is not available, as in the case of social networks that offer end-to-end encryption.
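The idea that a story's spread pattern alone carries a signal can be illustrated with a toy example. The hand-picked cascade statistics below are a stand-in for the learned graph representations used in research like Bronstein's (an assumption for illustration, not the actual method); they show that a shallow "broadcast" spread and a deep "viral chain" look different without the story's text ever being read.

```python
from collections import defaultdict

def cascade_features(retweets: list[tuple[str, str, float]]) -> dict:
    """Summarise how a story spread, ignoring its content entirely.

    `retweets` is a list of (parent_user, child_user, minutes_since_post)
    edges forming the diffusion tree, listed in sharing order.
    """
    children = defaultdict(list)
    depth = {}
    for parent, child, t in retweets:
        children[parent].append(child)
        depth[child] = depth.get(parent, 0) + 1
    times = [t for _, _, t in retweets]
    return {
        "size": len(retweets),                        # total reshares
        "max_depth": max(depth.values(), default=0),  # longest reshare chain
        "max_breadth": max((len(c) for c in children.values()), default=0),
        "minutes_to_last_share": max(times, default=0.0),
    }

# A shallow "broadcast" cascade: one account, many direct reshares.
broadcast = [("origin", f"user{i}", float(i)) for i in range(1, 6)]
# A deep "viral chain": each reshare comes from the previous sharer.
chain = [(f"user{i}", f"user{i + 1}", float(i)) for i in range(5)]
```

Because such features depend only on who shared what, and when, they apply unchanged to a story in any language, or even to one whose content is encrypted.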
However, almost all researchers agree that removing humans entirely from the detection process is neither practicable nor desirable.
"Machine learning as we use it is very dependent on human decisions," says Amador, "and fake news is a human activity, so humans should be involved."
And beyond this, dealing with the what, who and how of online misinformation still leaves the biggest question – why? For the foreseeable future, it's not one that machines will be in a position to answer.
Kennedy says: "Fake news is an applied problem for information theorists, computer scientists and intelligence researchers, but it's also a basic-level social science question.
"Understanding what makes a social group, and what makes a set of people coherent as a culture, is incredibly relevant to understanding why some people want to believe the world is flat.
"We have to look at how our interactions on these new platforms are revealing new aspects to the way we construct our social world."
91ÌÒÉ« is the magazine for the 91ÌÒÉ« community. It delivers expert comment, insight and context from – and on – the College's engineers, mathematicians, scientists, medics, coders and leaders, as well as stories about student life and alumni experiences.
This story was published originally in 91ÌÒÉ« 47/Winter 2019-20.