Google has once again been called out for algorithmically encouraging the spread of dubious, politically charged speculation and misinformation around a topical news event.
In the latest instance of the algorithmic amplification of misinformation, the news event in question is a shooting in a Texas church on Sunday. Authorities have identified 26-year-old Devin Patrick Kelley as the perpetrator.
Users of Google’s search engine who conduct internet searches for queries such as “who is Devin Patrick Kelley?” — or just do a simple search for his name — can be exposed to tweets claiming the shooter was a Muslim convert; or a member of Antifa; or a Democrat supporter…
The core issue is that Google is prominently placing unverified claims high up in its hierarchy of relevance-ranked data — aka “Google search results” — which the company has itself previously likened to a library index of truthful data. (Albeit a demonstrably skewable index when a passing “oracle of truth” can be Julian Assange’s Twitter feed… )
The section where this content is being embedded within Google’s search results is powered by its access to Twitter’s firehose of tweets, combined, it says, with its own ranking algorithms — which apparently also favor the kind of wild, clickbait-y and unverified claims that have been shown to spread like wildfire on Facebook (aka fake news).
The dynamic handful of tweets that Google’s algorithms choose to showcase within search results are sometimes labeled “Popular on Twitter” (or else just “on Twitter”).
The tweets do appear below a “Top Stories” section, which sits at the top of results. But the Twitter content is still displayed very prominently near the top of Google search results — meaning internet searchers looking for genuine information about a developing news story may well be unwittingly exposed to entirely unverified claims, including maliciously motivated, politically charged misinformation.
(On that wider topic, Google, Twitter and Facebook have all been giving evidence to Congress this month about how their platforms were — and still are being — manipulated as part of Russian political disinformation campaigns targeting U.S. voters.)
Asked about the Texas-related misinformation it’s algorithmically surfacing now, a spokesperson for Google provided us with the following statement: “The search results appearing from Twitter, which surface based on our ranking algorithms, are changing second by second and represent a dynamic conversation that is going on in near real-time.
“For the queries in question, they are not the first results we show on the page. Instead, they appear after news sources, including our Top Stories carousel which we have been constantly updating. We’ll continue to look at ways to improve how we rank tweets that appear in search.”
At the time of writing Twitter had not responded to a request for comment.
It’s not clear to what extent Twitter’s platform is feeding Google’s ranking algorithms at this point, i.e. via the dynamics at play on its own platform that can amplify certain tweets — such as bot networks working together to retweet particular politically charged content to try to get it trending. But it seems likely that the workings and dynamics of both platforms are at least partially involved in surfacing this content.
We confirmed that various tweets are being surfaced via Google searches by conducting some of our own searches on the topic. We were shown various additional/different tweets — including some suggesting the shooter was an atheist, or Antifa — as well as tweets from some established news outlets…
[Screenshots: Google’s “On Twitter” section shown in search results for “devin patrick kelley”]
Safe to say, it’s a bundled mix of claims from all sorts of sources that requires the person exposed to the content to have the critical faculties to sift “potentially accurate” from “at best highly speculative” or even “out-and-out nonsense.”
A month ago we reported on a similar issue following another U.S. mass shooting, when Google was distributing unverified claims (from 4chan) directly within its Top Stories section — i.e. not just via the “on Twitter” segment of search results — which was arguably even worse.
Though it’s still a big ask to expect the average internet user to critically and dynamically sort a random selection of tweets about a topical news event that are actively being lifted high into their field of view — alongside other types of content that Google also implies answer the core search query.
Safe to say, the algorithmic architecture underpinning so much of the content internet users are exposed to via tech giants’ mega platforms continues to let lies run far faster than truth online, by favoring flaming nonsense (and/or flagrant calumny) over more robustly sourced information.
Even as the content that internet users are being exposed to has become a very blurry blend, as increasingly dominant tech platforms algorithmically mix information that might correctly answer a query/need alongside viral/provocative claims intended to incentivize clicks/engagement — and thus generate more revenue for the underlying tech platform.
Such automated and mixed motivations are writ very large indeed in our modern digital “age of misinformation.”