The Guardian: Accusers and Purveyors of Fake News

Part One — Big Data and Fake News: How and Why Their Impact is Being Exaggerated, and Why This is Dangerous

Mansour Chow
27 min read · Nov 16, 2017

Over the last twelve months, if you’ve been a regular newspaper reader, particularly of The Guardian, you could easily be forgiven for thinking that Big Data and Fake News were the pivotal tools used by the Right and by Russia to decide the outcome of the EU Referendum and the US Presidential Election.

A conservative estimate based on Google search results for The Guardian website for the terms ‘Fake News’ and ‘Big Data’ suggests that the overwhelming majority of articles containing these terms (over 80%) were published in 2017.

Genuinely Fake News and the use of Big Data were not insignificant in their influence on the Leave or Trump campaigns, but their significance has been — and is being — vastly overstated by The Guardian and other ‘liberal’ mainstream outlets, such as the Washington Post and New York Times. Furthermore, Russian efforts to influence these elections, mainly through advertising on social media and the possible wielding of Twitter bots, are also being vastly overstated in their significance to the results.

Although it’s highly likely one can extrapolate the arguments made in this essay to other news media and newspapers, this four-part essay will mainly focus on The Guardian (and The Observer).

This essay, Part One, examines and explains:

· the dangers of exaggerating the impact of Big Data and Fake News on those recent elections

· how and why The Guardian are exaggerating the influence of Big Data and Fake News

Part Two will outline, examine and explain:

· The Guardian’s complicity in state-corporate propaganda

Part Three will outline, examine and explain:

· some basic principles of journalism, and how The Guardian frequently fails to meet those principles

Finally, Part Four will outline, examine and explain:

· examples to support the claim that The Guardian is a frequent purveyor of Fake News (of course, not all the time, but so frequently as to be extremely problematic)

· what profoundly improved systems of journalism could lead to.

The dangers of exaggerating Big Data and Fake News

There is a huge danger in overstating the significance of Big Data and Fake News on the EU Referendum and US Presidential Election, because it risks allowing seemingly Liberal and Left-leaning parties to do nothing to address their fundamental policy failures — namely their embrace, since at least the 1990s, of neoliberalism (a system of boom-and-bust economics which causes increases in inequality, weak or slow economic growth, scarce upward social mobility, increased job insecurity, upsurges in personal debt, and wage stagnation or decline for the majority of its victims) — and how that has led to a disenfranchised, disgruntled, and angry or apathetic electorate.

Worryingly, the Democrats in the US seem to be fighting to go straight back to the same blueprint that is evidently no longer fit for purpose. Digital explanations around Big Data, Twitter bots, Fake News and aligned Russian interference may mislead them into believing that they now have the silver bullet for electoral success and can carry on with an outdated line of thinking.

“It wasn’t that our status quo policies didn’t speak to the people,” they’ll assume. “It was that we needed to use more Twitter bots or better harness Facebook likes data to tailor our messages to the electorate. It was all Putin’s fault. If we just deal with Russian meddling, it will all be okay.”

A thorough examination of the reasons behind an increased susceptibility to, and consumption of, Fake News can thus be forgone, along with appropriate action to address those reasons. Taking the place of such examination and action are focused efforts on pushing companies like Facebook, Google and Twitter to deal with claims that they irresponsibly allowed their platforms to disseminate Fake News — in many cases with the added accusation of colluding with the Russian state to ransack democracy — with the result of limiting access to dissenting views. We are running the risk of — or in fact, already facing — mass censorship, with some of the most visited websites in the world filtering out, restricting or deprioritising access to search results on certain topics.

A huge reason the Leave and Trump campaigns were successful is that they tapped into (and strengthened or weakened, depending on the person) existing feelings or opinions that large parts of the electorate already held about specific policies or their consequences. I’m sure that Big Data and Fake News partly helped the respective campaigns tap into those existing feelings and opinions, but the more important — and, in my view, overlooked — aspect is that people felt a long-term and increasing animosity or antipathy towards government. We should seek to understand why they felt that way and what can be done about it, rather than look for easy answers that excuse the redundant and ineffective policies which significantly led to that palpable sense of disenfranchisement within the electorate in the first place.

So we should be extremely cautious of letting the Remain campaign (and who and what it represented), as well as the Clinton campaign (and who and what it represented), off the hook for their failures; otherwise they’ll never fundamentally address them. Instead, as a route back to power, they’ll just look either to blame or to harness Twitter bots, Facebook likes and social media advertisements (or perhaps to do both). They will also, in a way not dissimilar to Donald Trump, continue to attack and restrict dissenting views by proclaiming that those views constitute Fake News, all in place of making a meaningful difference to people’s lives.

Another key danger of exaggerating these influences in the news media is that other, more entrenched, matters risk being deprioritised. While there has been an overt and skewed focus on election manipulation and influence linked to the Kremlin, Big Data and Fake News (with, so far, in my opinion, minimal evidence demonstrating significant influence), huge topics that comprise solutions to the rigging or unfair influencing of elections are hardly being addressed at all in The Guardian or the rest of the mainstream press. One such topic is the largely impotent nature of the Electoral Commission and the drastic need to reform it from a toothless organisation into one that wields the level of power one should expect when political parties or organisations are not playing by the rules. Ironically, referring to the Electoral Commission’s investigation into Tory spending in the 2015 General Election, Ed Howker and Guy Basnett suggested in a Guardian Long Read that the Commission had actually “developed canines”.

The Chair of the Electoral Commission, Sir John Holmes, seems to disagree. Referring to the paltry £70,000 fine for Tory over-spending and non-declarations (which was, as far as I have gathered, the maximum fine the Commission could impose), he said:

“This is the third investigation we have recently concluded where the largest political parties have failed to report up to six figure sums following major elections, and have been fined as a result. There is a risk that some political parties might come to view the payment of these fines as a cost of doing business; the Commission therefore needs to be able to impose sanctions that are proportionate to the levels of spending now routinely handled by parties and campaigners.”

Carole Cadwalladr has reported quite extensively in The Observer on Big Data, Fake News, a connect-the-dots exaggeration of their influence on the election results, and their possible links to Russia. And although she discusses electoral reform frequently, across all her reports there are, in reality, only minimal calls for improving electoral law or, just as importantly, for changing how breaches of electoral law should be handled.

In several articles from April, May and June 2017, she puts forward the arguments made in a London School of Economics report suggesting that UK electoral laws need “urgent review”. However, what is unfortunately and consistently missing from the reporting (and from most reporting on the subject) is a proper summary of the study and the implications of its findings. Nowhere in the articles is there even a link to the actual report. Instead, there is simply the assertion that electoral law needs changing, without explaining how that law could be changed and what positive changes would follow.

Significantly, readers are not given any indication of what they might be able to do to push for those changes to happen. Admittedly, her March 2017 article predates her first reference to the LSE report (and, indeed, the report’s publication) by a month, but it ends on a note of almost despondent nihilism:

“Is it the case that our elections will increasingly be decided by the whims of billionaires, operating in the shadows, behind the scenes, using their fortunes to decide our fate?”

To have a stab at that rhetorical question: if we have a citizenry made to feel powerless and ill-informed about the issues that matter, then the answer is likely to be yes.

From the LSE Media Policy Project Brief 19

And while there is clearly a legitimate concern, linked to Big Data and Fake News, about how democracy is for sale, the hysteria around tying that subject to Russian meddling doesn’t sufficiently uncover the issue, especially not with any detailed systematic analysis, nor with any account of what could or should be done about it. Rather, it removes from the limelight the appallingly corporate-driven nature of election funding and the fact that the best-funded presidential candidate wins 91% of the time (a figure that has probably dropped slightly as a result of the victory of Donald Trump, who was actually less well funded than Hillary Clinton).

Basically, the fact that democracy is generally for sale is the far bigger, deeper and more systemic problem that needs addressing. Addressing it would simultaneously help — in significant part — to deal with any increasing risks to democracy posed by Big Data and Fake News. An overt and skewed join-the-dots focus on Robert Mercer, Julian Assange, Nigel Farage and Russian meddling, however, increases the likelihood that the response will simply paper over the cracks.

There is also the very serious danger, as I mentioned earlier, that the exaggeration of Fake News results in the censorship and restriction of dissenting views. While newspapers argue that Fake News, Big Data, Twitter bots and Kremlin-backed paid online trolls represented a serious attack on democracy, the efforts to address those accusations are meanwhile tying democracy’s hands, producing widespread censorship of, and restricted access to, dissenting views.

In fact, Twitter recently admitted to Congress that it had already enacted exactly this sort of worrying censorship. In the example Twitter gave, the censorship more likely worked against Trump and for Clinton during the US Presidential Election campaign: its ‘automated’ system censored access to a whopping 48% of all Tweets with the hashtag #DNCLeak, even though only 2% of Tweets with that hashtag came from “potentially Russian-linked accounts”, a definition Twitter provided which is broad enough to include anyone who has tweeted from Russia or in Russian.
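To make the scale of that mismatch concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 48% and 2% figures come from Twitter’s reported testimony; the total tweet volume is a made-up round number purely for illustration:

```python
# Back-of-the-envelope check on Twitter's #DNCLeak admission.
# Only the two percentages come from Twitter's reported testimony;
# the total tweet volume is a made-up round number for illustration.

total_tweets = 100_000          # hypothetical number of #DNCLeak tweets
hidden_share = 0.48             # share Twitter's automated system hid
russian_linked_share = 0.02     # share from "potentially Russian-linked accounts"

hidden = total_tweets * hidden_share                  # 48,000
russian_linked = total_tweets * russian_linked_share  # 2,000

# Even if every "potentially Russian-linked" tweet was among those hidden,
# the remainder of the hidden tweets came from ordinary accounts.
non_russian_hidden_at_minimum = hidden - russian_linked  # 46,000

print(f"Hidden: {hidden:,.0f}")
print(f"Potentially Russian-linked (maximum possible overlap): {russian_linked:,.0f}")
print(f"Hidden but not Russian-linked, at minimum: {non_russian_hidden_at_minimum:,.0f}")
```

In other words, on Twitter’s own figures, at least 46 of every 100 #DNCLeak tweets were hidden despite not meeting even Twitter’s broad ‘potentially Russian-linked’ definition.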

The fact that this admission and its implications have scarcely been picked up by The Guardian or most other mainstream news outlets should be as serious a cause for concern for democracy — if not more serious — than the suspicion that Big Data, bots and Fake News derailed recent elections.

The exaggeration of Fake News and Big Data, particularly in the form of hyped-up allegations and insinuations of Russian election meddling, is incredibly reckless because it ramps up hostility between already hostile countries. Heavily slanted and irrational allegations of Russian nefariousness in our media have consequences. This irresponsible journalism is unnecessarily increasing tensions between nuclear-armed countries. As Professor Jeffrey Sommers writes in The Nation:

“The United States is already in a Cold War 2.0 with Russia. Continued ignorance by the press and politicians could lead to the total hot war we narrowly escaped the first time around.”

Finally, and perhaps most simply, the overstatement of Fake News and Big Data is indicative of bad journalism. And with bad journalism comes an ill-informed and disempowered public, and a weaker society.

How and why The Guardian exaggerates the impact of Big Data and Fake News

On the Big Data front, firstly, senior campaign staff have an interest in overplaying their incorporation of less traditional campaigning methods (like data analytics) so that their involvement and work on a campaign can appear integral to its success. Thus, we see articles more likely to report these claims in the way senior campaign staff frame them (e.g. more favourably to their cause).

Secondly, the claims or allusions made by Big Data companies themselves are also a big factor. The value of, and demand for, Big Data companies, particularly Cambridge Analytica, was expected to increase considerably as a result of people believing them to be more effective than they actually were. It’s obviously in Cambridge Analytica’s business interests to allude to, or to outright claim, that they were the pivotal piece of the puzzle in Trump’s US election victory and may have played a big part in the EU Referendum outcome (although, as Carole Cadwalladr’s Observer article suggests, they might not legally be able to say too much about Brexit).

But the significance of tailoring messages to personality types is probably not as impressive or influential as has been claimed. Two academics, Michal Kosinski and Professor Jonathan Rust of Cambridge University’s Psychometric Centre, have provided comments in a number of articles about the techniques used by Dr. Aleksandr Kogan, who was reportedly a formative and vital influence in setting up Cambridge Analytica’s system for harnessing specific data to predict personality traits and then tailoring messages accordingly, with the intention of influencing behaviour. Kogan, a former Cambridge University lecturer, is reported to have used similar or even the same techniques developed by the Psychometric Centre, and possibly even the same collected data.

Yet Kosinski, Kogan and Rust may well be more likely to talk up the significance and impact of their own research areas. Exaggerating the influence of the work is not necessarily a conscious decision, but some bias in that direction is likely because of the potential benefits for future prospects. It could be good for their current projects too: every time they’re mentioned, they might get to mine far more data (as people become increasingly intrigued by the work).

However, if The Guardian and other journalists cared to delve a little deeper into what these scientists have to say, we might have a more nuanced understanding of the potential impact of Big Data. Kosinski and Kogan themselves actually question the conclusions and inferences about how pivotal Big Data was to Trump’s US Election victory. Both, in discussing a Vice Motherboard article (an earlier version of which featured in the Swiss magazine Das Magazin) in Facebook group posts, point to other articles as far more accurate assessments, including a Bloomberg piece by the mathematician Cathy O’Neil entitled “Trump’s ‘Secret Sauce’ Is Just More Ketchup”. In it, she writes:

“Clinton’s analytics team put much more money — and just as much, if not more data — into categorizing and manipulating American voters through targeted political ads. The Trump campaign reportedly focused on Facebook, tricking people into taking personality surveys and culling their likes. Compare that to Clinton’s database, which started with everything that Barack Obama’s campaign had. This would include Facebook data handed down from Obama’s famous analytics team, which employed an app — no longer available — that helped access information about Obama supporters’ friends.

In other words, it looks like Clinton’s campaign had more information, not less, than Trump’s.”

Within the group discussion, Kogan, one of the key scientists in question, actually describes the ability to make accurate predictions based on Facebook likes and then tailor messages effectively as “complete bullshit”.

Kogan’s post from a Facebook Group Discussion

He goes on to say, “The take home point here is that making forecasts about personality from social media is extremely noisy; and from my testing, I don’t think it’s any better than just assuming everyone is average.”

I suspect that the reality of what can be discovered about someone through social media, and of how tailored the adverts and messages shown to individual users really were, is far less impressive and worrying than mainstream media has implied. In fact, most of the meaningful correlations will be quite obvious (e.g. most people who like Breitbart or Britain First will have other political affiliations or predictive factors that you can easily guess). And personality tests have always been a questionable tool for meaningfully telling us about individuals or predicting behaviour anyway. So while Big Data might have been relatively useful and harnessed well, its importance to Trump’s US Election campaign is being vastly exaggerated.

As for the media’s general coverage of the use of Big Data in politics, it has largely been framed, including within The Guardian, in highly negative terms. Whilst we should be aware of Kosinski’s potential bias, he says that there are potentially positive and overlooked factors that may give cause for more optimism in the long run:

“Such microtargeting enables politicians to talk to each one of the voters about relevant and interesting issues that a given voter is most competent to judge. Algorithms enable politicians to (to some extent) understand the dreams, fears, and personalities of individuals, and communicate the relevant part of their program/agenda. It’s an equivalent of a politician visiting you at home and taking time to get to know each of the family members and then talk to them about how is she going to address their issues. Sounds like a democratic utopia, doesn’t it?

This increases political engagement. Many individuals who may have previously felt excluded from the political process, will go and vote. Yes, such voters may vote for the populists, but in the long-term a more inclusive system encouraging everyone to participate (and then to consider the responsibility for their votes) is, in my view, very healthy for the democracy.

Also, the ‘establishment’ surely took notice that those previously inactive/scattered voters are actually voting now. This will certainly encourage such politicians to work harder to listen and talk to the groups that they previously looked down at (poor, young, unemployed, etc).

Ability to split the programs into bits and talk to individuals about relevant fragments enables politicians to maintain broader programs and aim at broader segments of voters, instead of positioning themselves as a representative of a one group only (e.g. farmers/professionals). I hope that it will lower the tendency of politicians to play such groups against each other.”

An important and overlooked piece by Martin Robbins in Little Atoms — from around the time the Big Data stories were gaining serious traction in major newspapers, including The Guardian — helps to summarise some of the arguments that Big Data was nowhere near as influential on the US Election results as purported, because, amongst other reasons: i) Ted Cruz, who had Cambridge Analytica’s backing, lost primaries to candidates with nothing like the same finances or data-analytics tools behind them; ii) Cambridge Analytica stated that they didn’t even use Facebook data (although whether they did is disputed — they appear to have claimed to at one point but no longer make that claim); iii) Hillary Clinton’s campaign, as mentioned earlier, also used something similar to Cambridge Analytica and still lost.

As Robbins concludes his article:

“So if you step right back and look at all this, what do we see? We see a data science firm with Steve Bannon on the board, bigly claims about its powers, whose exact methodology is unclear to us. We see a candidate, Donald Trump, who used the same successful strategy right the way through his campaign whether he was employing Cambridge Analytica or a random dude with HTML skills. We have another candidate, Ted Cruz, who used the same firm and tanked. We have another candidate, Hillary Clinton, who used something very similar to Cambridge Analytica and also lost.

How exactly do you turn all that into the story of an unstoppable data science behemoth?”

I’m hopeful that covers much of the Cambridge Analytica arguments, but what about the influence of Twitter bots? There have been all sorts of inferences in the media about the significance of their use on the EU Referendum and US Election, often coupled with suggestions or allegations of Russian linked interference.

Take Carole Cadwalladr’s article from February 2017, in which she writes:

“Sam Woolley of the Oxford Internet Institute’s computational propaganda institute tells me that one third of all traffic on Twitter before the EU referendum was automated “bots” — accounts that are programmed to look like people, to act like people, and to change the conversation, to make topics trend. And they were all for Leave. Before the US election, they were five-to-one in favour of Trump — many of them Russian.”

In a Guardian article of April 2017, Paul Flynn writes:

“A significant number of Twitter users are bots that can act to spam and manipulate public opinion on current affairs.”

And:

“This cyber-manipulation is no fiction and played a role in the EU referendum and Donald Trump’s election.”

In another Guardian article, published five days later, Flynn claims:

Research from University College London explains how a large group of bots can misrepresent public opinion. “They could tweet like real users, but coordinated centrally around a specific topic. They could all post positive or negative tweets skewing metrics used by companies and researchers to track opinions on that topic.” Bots can even “orchestrate a campaign to create a fake sense of agreement among Twitter users where they mask the sponsor of the message, making it seem like it originates from the community itself”.

What is not made clear or suitably acknowledged is the current distinct lack of evidence about bots’ ability to actually form, change, strengthen or weaken opinions. Admittedly, the ability to suggest a consensus that does not necessarily exist in reality, and the subsequent potential to actually create consensus or change views based on the appearance of one, may become increasingly significant as the use of Twitter grows. However, a lot of further research would be needed before drawing any broad conclusions.

But before dwelling on proving or disproving Twitter bots’ ability to form, weaken, strengthen or change opinions, let’s think about how influential they might have been on the EU Leave or US Election campaigns by considering some simple demographics: who votes, and who uses Twitter. What we happen to know is that, generally, older voters are:

1. Overwhelmingly more likely to be registered to vote compared to younger voters.

2. If registered to vote, overwhelmingly more likely to vote compared with younger registered voters.

3. Therefore, overwhelmingly more likely to vote than young people.

What do we know about the demographics of Twitter users in the UK? Firstly, there are only around 20 million (or fewer) Twitter users in the UK, out of a total population of approximately 66 million and an internet-using population (those who have used the internet within a three-month period) of approximately 45 million. And of those Twitter users, it’s likely that many do not use it regularly.

We also know UK users of Twitter are vastly more likely to be young, with many below 18 and so unable to vote, and with the biggest percentage of overall users aged below 29.

And what do we know about younger people who are of voting age? They are:

1. Overwhelmingly less likely to be registered to vote compared to older voters.

2. If registered to vote, overwhelmingly less likely to vote compared with older registered voters.

3. Therefore, overwhelmingly less likely to vote than older people and particularly people aged 65+.

In terms of the EU Referendum, younger people of voting age were less likely to have registered to vote, and those who had registered voted in smaller numbers. Sky Data initially suggested a turnout among registered voters of 36% for those aged 18–24, with the highest turnouts among those aged 55–64 (81%) and those aged 65+ (85%). However, a further study suggested that young people turned out in much higher numbers, with an estimated 64% of registered voters aged 18–24 actually voting. Importantly, though, that study also suggests an even higher turnout amongst the over-65s, at 90%. And findings of an increased youth turnout do not necessarily translate into a similar increase in the proportion of the same age group registering to vote in the first place.
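To make the compounding effect of registration and turnout explicit, here is an illustrative sketch. The turnout figures are the study estimates quoted above; the registration rates are pure assumptions for the purpose of the example:

```python
# Illustrative only: how registration and turnout compound by age group.
# The turnout figures (64% and 90%) are the study estimates quoted above;
# the registration rates are assumptions made up for this sketch.

groups = {
    # age group: (assumed registration rate, estimated turnout of registered)
    "18-24": (0.65, 0.64),
    "65+":   (0.95, 0.90),
}

for group, (registration, turnout) in groups.items():
    effective = registration * turnout
    print(f"{group}: ~{effective:.0%} of the whole age group voted")

# Even on the more generous youth-turnout estimate, the share of all
# 18-24-year-olds who actually voted lags far behind the share of
# over-65s, which is the compounding effect the lists above describe.
```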

Despite the contention that bots wielded significant influence, there is little evidence of bots persuading young people to vote Leave, because young people were vastly more likely to vote Remain: an estimated 75% of those aged 18–24 polled intended to vote Remain, compared with the 61% of those aged 65+ (the group least likely to use Twitter at all, let alone regularly) who intended to vote Leave. The insidious influence of Twitter bots on the EU Referendum as a cause of Brexit therefore seems largely exaggerated.

What about the influence of Twitter bots on the US Presidential Election results? Well, Pew Research Center data from 2015 suggests that the US has a generally lower percentage of Twitter users than the UK: only 23% of American internet users use Twitter (and internet access itself is proportionally lower). Again, the highest number of users come from the youngest age bracket (under-18s were not included in the research but are likely to figure disproportionately highly). We also find younger people least likely to vote or to be registered to vote, but, of those registered, more likely to vote for Clinton.

The United States is a more polarised and unequal society than the UK, so caution should be taken before drawing sweeping conclusions from this, but based on the general demographic information it is highly doubtful that Twitter bots made as great a difference as has been alluded to in The Guardian and other mainstream media.

As these technologies become more commonly used by older voters, bots will perhaps be more likely to hold a bigger influence on elections, but for the EU Referendum and the US Election, the significance of the bots that were used is being highly exaggerated.

Okay. So what about social media advertising linked to Russia? Surely this was a big influence on the US Election? Well, if you had mainly been reading the American and British mainstream press, including The Guardian, it’s easy to see why you might think that. But let’s consider the facts in a little more detail.

Mark Penn, a former Clinton pollster and political strategist, recently wrote an opinion piece for the Wall Street Journal that really drives home how insignificant the level of Russian-linked advertising funding would have been to the US Election. He writes:

“Consider the scale of American presidential elections. Hillary Clinton’s total campaign budget, including associated committees, was $1.4 billion. Mr. Trump and his allies had about $1 billion. Even a full $100,000 of Russian ads would have erased just 0.025% of Hillary’s financial advantage. In the last week of the campaign alone, Mrs. Clinton’s super PAC dumped $6 million in ads into Florida, Pennsylvania and Wisconsin.”
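Penn’s percentage is easy to verify; here is a quick sketch using only the figures in his quote:

```python
# Verifying Penn's arithmetic using only the figures in his quote.
clinton_budget = 1_400_000_000   # $1.4bn: Clinton campaign plus associated committees
trump_budget   = 1_000_000_000   # ~$1bn: Trump and his allies
russian_ads    =       100_000   # the "full $100,000" of Russian ads

advantage = clinton_budget - trump_budget   # $400m financial advantage
share_of_advantage = russian_ads / advantage

print(f"Share of Clinton's advantage erased: {share_of_advantage:.3%}")   # 0.025%
print(f"Share of the two campaigns' combined spend: "
      f"{russian_ads / (clinton_budget + trump_budget):.4%}")
```

Measured against the combined spend of both campaigns, the $100,000 is around four thousandths of one per cent.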

Facebook also admit that 56% of the ads (from June 2015 to May 2017) they identified as Russian-linked (how they arrived at this determination, or what criteria they used, has not been disclosed) actually ran after the US Election. Last time I checked, Facebook didn’t offer customers the option of using time-travelling advertisements to influence elections.

Facebook have also disclosed that approximately a quarter of the 3,000 adverts that the $100,000 (from Russian-linked sources) purchased weren’t even seen by anyone. Yet The Guardian’s headline for what was clearly a story highlighting limited evidence of Russian influence on the election, in at least these regards, was: ‘Facebook says up to 10m people saw ads bought by Russian agency’.

And what we don’t know from Facebook’s estimate that 10m people in the US saw the Russian-linked ads in question is how many of them were even US citizens or eligible to vote. I suspect that the real number of people who saw these advertisements and were eligible to vote in the US Presidential Election is significantly lower. Nor do we know what percentage or number of the estimated 10m saw at least one advertisement prior to the US Election, which should be the most important factor in determining election interference or influence.

Taking, as a crude estimate, the 44% of adverts that Facebook say ran before the election, and applying it to that 10m figure, gives us an estimate of only 4.4m unique users who saw an ad prior to the US Election. And, of those unique users, we don’t know the actual age demographics, but we can broadly assume that many will have been too young to vote in the election anyway (you only have to be thirteen years old to have a Facebook account).
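Spelled out as a sketch, where the 10m and 56% figures are Facebook’s reported disclosures and the 80% voting-age share is an assumption of mine, purely for illustration:

```python
# The crude pre-election reach estimate from the paragraph above.
# The 10m reach figure and the 56% post-election share are Facebook's
# reported disclosures; the voting-age share is a pure assumption.

reach = 10_000_000               # Facebook's estimate of people who saw the ads
ran_before_election = 1 - 0.56   # 44% of the ads ran before the election

saw_pre_election = reach * ran_before_election
print(f"Crude pre-election reach: ~{saw_pre_election:,.0f}")   # ~4,400,000

# Facebook accounts are open from age thirteen, so some unknown share of
# those users could not vote anyway. An assumed 80% voting-age share,
# purely for illustration, shrinks the figure further:
assumed_voting_age_share = 0.80
print(f"Of voting age (assumed 80%): "
      f"~{saw_pre_election * assumed_voting_age_share:,.0f}")
```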

Certainly, Facebook themselves have said that most of the ads did not directly advocate for or against a presidential candidate. And this seems consistent with the majority of the advertisements they have shared with Congress that have so far been publicly disclosed.

And all of this is before considering whether the advertisements were even effective in changing behaviours or choices anyway. It’s therefore unlikely that Russian linked ads on Facebook made any significant difference to US Election results.

However, the possible influence of Russian-linked social media content on the election results has also been gaining traction in mainstream news recently, with the claim that Russian-linked Facebook content was seen by 126 million Americans. Although The Guardian acknowledges Facebook’s claim that this works out as 0.004% of content in Facebook news feeds, it introduces and frames the story like this:

“Although 126 million people is equivalent to about half of Americans eligible to vote, Facebook plans to downplay the significance at the congressional hearings.”

According to The Guardian, providing context to try to avoid ludicrous anti-Russian hysteria is now akin to downplaying significance.

To help us understand these issues, particularly to better gauge the implications for the US electorate and whether the content could have influenced the vote, we can add a little nuance to The Guardian’s rather unhelpful framing (nuance of a sort which could easily have been offered within the article in the first place). What The Guardian does not say about this disclosure, and which is important, is that we do not know how many people actually paid attention to the content. Anyone who uses Facebook at least fairly regularly knows that many items are scrolled past with little attention paid to them. And we do know that most of the content did not directly advocate voting for any specific candidate.
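To see how both headline figures can be true at once, here is a purely illustrative sketch; every number in it is an assumption except the 0.004% share, which is Facebook’s own claim as reported above:

```python
# Illustrative reconciliation of '126m people reached' with '0.004% of
# News Feed content'. Every input here is an assumption except the
# 0.004% share, which is Facebook's claim as reported above.

feed_items_per_user_per_day = 200   # assumed
days_in_window = 600                # assumed: roughly the period in question
russian_linked_share = 0.00004      # Facebook's 0.004% figure

items_seen = feed_items_per_user_per_day * days_in_window   # 120,000
expected_russian_linked_items = items_seen * russian_linked_share

print(f"Feed items a typical user saw over the window: ~{items_seen:,}")
print(f"Of which Russian-linked, in expectation: "
      f"~{expected_russian_linked_items:.1f}")
# -> roughly five items out of ~120,000, most of them presumably scrolled
#    past, which is how both headline figures can be true at once.
```

On those (assumed) inputs, ‘reaching’ a user can mean a handful of items flickering past in a feed of over a hundred thousand, which is precisely the context The Guardian’s framing omits.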

What about the Kremlin-backed news organisation Russia Today (since rebranded as, and more commonly known as, RT)? According to many speculative articles, it would be easy to believe it had meddled in the US Election, particularly through the use of Twitter advertising.

As has been widely reported, on 26th October 2017 Twitter took the bizarre step of banning RT from advertising on its platform, based on the rather spurious claim that RT sought to interfere with the US Election. Twitter cited the US National Intelligence Report, which accuses RT of peddling propaganda by airing content such as a documentary about the Occupy Wall Street movement which “framed the movement as a fight against ‘the ruling class’ and described the current US political system as corrupt and dominated by corporations”, by running “anti-fracking programming, highlighting environmental issues and the impacts on public health”, and by “characterising the United States as a ‘surveillance state’, [alleging] widespread infringements of civil liberties, police brutality, and drone use.”

It appears that in many areas of the Intelligence Report, RT is being held responsible for US Election meddling because they made valid, insightful and legitimate criticism of the United States. Simply producing journalism could now be the Orwellian litmus test for election meddling.

RT are being held up as propagandists by the US intelligence community because, on occasion, they offer the kind of journalism in the United States that mainstream American news outlets all too frequently fail to offer: adversarial, with a systemic focus. If RT’s journalistic standards were demonstrably worse than those of most other British or US news media, these sorts of criticisms might hold more water.

Was RT’s use of Twitter advertising a big factor in the US Election? In response to Twitter’s ban and claims, RT helpfully published its most promoted tweets during the three months leading up to the US Election, the costs it paid, and how many users clicked on the promoted links:

“Out of 70 top promoted tweets only five had links to the US presidential campaign. The story about Donald Trump addressing Hillary Clinton’s email scandal in a presidential debate set RT back some $185. Another one — on the prospects for oil prices in case Trump decides to scrap the Iranian nuclear deal — has cost just over $204. Some $141 was spent to promote a tweet on Max Keiser’s describing Trump’s presidency as an acid test for the US constitution. Also, RT spent some $109 and $92 on tweets about Jill Stein talking about America’s two-party system and Trump being rushed off stage in Reno respectively. Just 5 tweets out of 70.”

To put it bluntly, the detail and prominence with which The Guardian and other mainstream sources have covered the allegations of Russian-linked election meddling through social media and social media advertisements is more damning of the journalistic credibility of the US and UK mainstream press than it is of the Kremlin, particularly in the added context of the US’s own extensive history of election meddling and its sponsorship of huge propaganda campaigns and coups on foreign soil.

So that covers some of The Guardian’s Russophobic focus on Fake News, but The Guardian also has a domestic preoccupation with it, whereby it generally argues, alludes to, or denigrates sources outside the mainstream as being fake or illegitimate. There appears to be an active attempt by The Guardian to push the idea that the news in established newspapers, particularly its own, is reliable and credible, while other sources, particularly newer ones, are not.

Websites which frequently criticise mainstream news are referred to in Guardian headlines and stories as “DIY political websites”, “alternative news” and “alt-media”. The Guardian explains that those running these sites or writing for them do so on a “semi-professional” basis. Readers are clearly led to believe these sites are less reliable because they are, according to The Guardian, run from “laptops and bedrooms”.

Whilst some of this is true, there are also big differences in the nature and quality of these websites’ content, which makes general commentary about them very difficult and highly problematic in and of itself. The websites also have notably variable levels of journalistic integrity. For example, there are strong indications — albeit from a disgruntled ex-employee, Alex McNamara, who by his own account is still owed substantial fees — that one of the websites featured, Evolve Politics, has a disappointingly and disgustingly low level of journalistic ethics. They have not signed up to any regulator, nor do they outline how they handle complaints or what editorial guidelines they subscribe to, except to say that they “actively encourage” writers to join the National Union of Journalists. However, none of this takes away from the fact that a lot of their media criticism, particularly from their assistant editor, Matt Turner, who also writes for the Independent and other outlets, is well argued, well reasoned and accurate.

The Guardian’s feature on ‘alt-media’ also appears inaccurate in several areas. Many of the websites discussed are professional outfits, but The Guardian makes no distinction for the ones that are. For example, Thomas G Clark of Another Angry Voice, a website discussed in the feature, states on his own blog:

“My blog is supported by “Pay as You Feel” donations meaning that I have the time to write and create infographics full time.”

In other words, he blogs as his profession.

The Guardian also alludes to these “alt-media” sites being one side of the same coin as alt-right and white nationalist sites, some of which write fabricated stories, and many of which write one-sided articles.

Unfortunately, The Guardian has not made anywhere near enough effort to actually support the claims or allusions it makes about these Left-leaning “alternative news sites”, nor have there been anywhere near sufficient attempts to assess the legitimacy of many of the grievances and criticisms that these “DIY political websites” make about The Guardian and other mainstream news outlets, a point Matt Turner makes in a post on Medium.

It is frankly sheer arrogance that so many mainstream newspapers, including self-proclaimed liberal newspapers like The Guardian, consistently refuse to seriously consider the quality of their own journalism as a significant reason for the general drop in regular daily readership (at least in print sales) and the rise of Fake News in the first place. More significantly (and interwoven with this, as I will examine), they drastically fail to consider their own systemic biases and how those biases negatively affect their output.

And why might that be? To put it simply, it’s not in their interests to. This will become clearer in Part Two of the essay which will examine The Guardian through Herman and Chomsky’s Propaganda Model, introduced in their seminal 1988 book, Manufacturing Consent: The Political Economy of the Mass Media.

But before I close the first part of this four-part essay, it’s important to make clear that I am not attempting to dismiss entirely the influence that Fake News, Big Data and interrelated allegations of Russian meddling may have wielded on recent elections, and certainly may wield on future elections.

There are legitimate and serious concerns that need to be addressed, particularly for future elections. However, it’s vitally important that these concerns and subjects are reported on rationally and fairly in our news media, and that the reporting places them within a more systematic analysis, so as to help us address the deeper-rooted issues.

In these respects, I firmly believe, from the evidence I’ve presented, that much of mainstream (corporate) journalism on these very subjects, such as The Guardian’s, has failed and is continuing to fail.

Postscript (23/11/17)

News is hard to keep up with. As you might expect, it changes all the time.

Since I wrote this essay, the sort of censorship that I noted was already occurring — and which is a real danger of an irrational and imbalanced Russophobic media focus — has expanded further, with the announcement that Google is now taking steps to de-rank RT and Sputnik in the Google News service and in search results. Google’s Executive Chairman, Eric Schmidt, claims that Google will not be removing RT and Sputnik entirely from the Google News platform “because that’s not how we work”, and that he is “very strongly not in favor of censorship”, as if the sort of de-ranking he describes would not amount to censorship in effect.

As the journalist and tireless media analyst Adam Johnson tweeted:

[Embedded tweet]
