Technologists Increasingly Alarmed by Filter Bubbles, Misinformation, Augmented Reality

Early in the 2016 US Presidential election, CNN pundit Van Jones mused that then-candidate Donald Trump had harnessed social media the way his predecessors had harnessed other new and emerging technologies. He equated Trump’s use of social media with JFK’s mastery of television, and remarked on how Trump navigated social media with a savvy that gave him an edge and made his opposition look dated and dull.

In hindsight, Jones’ observation was more prescient than he or anyone at the time knew. The impact of social media and internet culture on the American social consciousness ran far deeper than most realized. Not only did the arguments around the election play out in cyberspace, but nation-states themselves invested in grand attempts to leverage social media to influence the election in their favor. Much analysis also turned to how Trump’s underestimated “troll army” had an outsized impact on the dialogue by harnessing the short, effective message of the meme.

The unexpected effectiveness of social media in the election has since drawn the attention of internet and technology researchers, who have been analyzing how information actually spreads online. Many of these researchers and professionals now argue that the internet has been accidentally engineered to behave in unexpected and potentially very harmful ways that promote misinformation and extreme ideas. By rewarding clicks rather than content value, developers have essentially built a machine that amplifies sensational and outrageous content while burying sober or nuanced media. In the click-optimized environment of the internet, shocking or emotional media (true or not) crowds out fact and spreads rumors and untruths. This is partly a result of the filter bubble problem: personalized newsfeeds and recommendation algorithms on social media are engineered to show you content you will like and share, not necessarily content that is factually correct or intellectually valuable. This has sociologists and technologists worried about our ability to cope as a society with essentially differing realities and alternative facts.
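
To make that incentive structure concrete, here is a minimal, purely hypothetical sketch of what an engagement-first feed ranker looks like in code. The fields, weights, and scoring function are invented for illustration and do not describe any platform’s actual system; the point is simply that the objective is predicted clicks and shares, and accuracy is never an input to the ordering.

```python
# Hypothetical sketch of an engagement-optimized feed ranker.
# Fields, weights, and numbers are invented for illustration; no real
# platform's code is being described. The point: the score is built
# entirely from predicted engagement, and accuracy never appears in it.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_click_rate: float   # learned from past user behavior
    predicted_share_rate: float   # likewise
    is_accurate: bool             # unknown and unused at ranking time

def engagement_score(post: Post) -> float:
    # Score is purely a function of expected engagement.
    return 0.7 * post.predicted_click_rate + 0.3 * post.predicted_share_rate

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort by engagement alone; truth is not an input to the ordering.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Careful, nuanced policy analysis", 0.02, 0.01, True),
    Post("SHOCKING claim you won't believe", 0.21, 0.15, False),
])
print([p.title for p in feed])  # the sensational post lands on top
```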

In October of last year, ex-Facebook executive Chamath Palihapitiya lamented the brutal and unexpected effects of Facebook’s algorithms. At an event at Stanford, Palihapitiya said he felt “tremendous guilt” for what he had created:

“The short-term, dopamine-driven feedback loops we’ve created are destroying how society works,” he said, referring to online interactions driven by “hearts, likes, thumbs-up.” “No civil discourse, no cooperation; misinformation, mistruth. And it’s not an American problem — this is not about Russian ads. This is a global problem.”

Similar warnings came from the alarming talk given by Zeynep Tufekci at TEDGlobal in New York, where she told the story of how YouTube’s algorithm promotes increasingly extreme viewpoints and narrow-mindedness:

…in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and autoplaying to me white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy left, and it goes downhill from there.

Well, you might be thinking, this is politics, but it’s not. This isn’t about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube and YouTube recommended and autoplayed a video about being vegan. It’s like you’re never hardcore enough for YouTube.

That is to say, YouTube doesn’t naturally promote a broad range of ideas or a balanced diet of information. Rather, its algorithm encourages a spiral of content that gets progressively more sensational, narrow and extreme. Tufekci then goes on to describe how secret Facebook experiments had drastic effects on voter turnout, and ponders the possibility of using such power to disrupt democracy by selectively mobilizing voters. The overall lesson, of course, is that one’s social media use may have a far more significant impact on what one thinks and does than one consciously realizes, and that much of that impact may be driven by algorithm-promoted misinformation or (in the worst case) even purposeful disinformation.
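
The spiral she describes can be illustrated with a toy model. The sketch below is not YouTube’s system and the numbers are made up; it only shows how a recommender that greedily maximizes predicted watch time over “related” items can ratchet toward more intense content, one small step at a time.

```python
# Toy model of a recommendation spiral (not any real platform's system).
# Assumption for illustration: more intense content holds attention a
# little longer, and each "up next" pick must be related (close in
# intensity) to the current video.

def predicted_watch_time(intensity: float) -> float:
    # Stand-in for a learned engagement model; purely illustrative.
    return 1.0 + intensity

def next_video(current: float, catalog: list[float]) -> float:
    # Only consider videos within one "step" of the current one, then
    # pick whichever the model predicts will be watched longest.
    related = [v for v in catalog if abs(v - current) <= 1.0]
    return max(related, key=predicted_watch_time)

catalog = [float(i) for i in range(11)]  # intensity 0 (mild) .. 10 (extreme)
video = 2.0
for step in range(5):
    video = next_video(video, catalog)
    print(f"autoplay #{step + 1}: intensity {video}")
# Each pick is only slightly more intense than the last, but after a
# handful of autoplays the viewer is far from where they started.
```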

Yesterday, BuzzFeed (ironically a notorious source of clickbait and a prime beneficiary of sensationalist algorithms) published an interview with technologist Aviv Ovadya. Ovadya sees terrible potential in the coming intersection of augmented reality and the internet’s inability to deal with misinformation. Because the internet rewards likes rather than truth, he argues, there is little standing in the way of large-scale deception. As the article puts it:

It became clear to him that, if somebody were to exploit our attention economy and use the platforms that undergird it to distort the truth, there were no real checks and balances to stop it.

In a passage that is especially poignant in light of the controversies surrounding both Trump’s social media impact and the fake comments submitted to the FCC during the net neutrality debate, Ovadya ponders the possibility of technology-driven political deception in the near future:

…increasingly believable AI-powered bots will be able to effectively compete with real humans for legislator and regulator attention because it will be too difficult to tell the difference. Building upon previous iterations, where public discourse is manipulated, it may soon be possible to directly jam congressional switchboards with heartfelt, believable algorithmically-generated pleas. Similarly, Senators’ inboxes could be flooded with messages from constituents that were cobbled together by machine-learning programs working off stitched-together content culled from text, audio, and social media profiles.

The article goes on to highlight burgeoning technology that can be used to warp or fabricate reality, including the new face-swapping technology ‘deepfakes’ and similar tools for audio. It ponders the consequences of a world where anyone can make it “appear as if anything has happened, regardless of whether or not it did.” Needless to say, this opens up a whole world of possibilities for differing or competing versions of reality, or for rejecting reality altogether.

All of this serves, of course, to highlight the growing importance of the internet in shaping our perceptions and realities, a role we haven’t really considered before. We are just beginning to see the ways the internet and social media can be used to misinform and manipulate. It recalls how the internet first came to be used for crime: in ways no one expected. The internet was never designed to be secure in the first place. It was never intended to be secret; it was intended to be loud, open and free. Security has been retroactively shoehorned into systems and protocols never intended to hide information, as we have expanded the uses of the internet and forced it into more and more applications never imagined by its pioneering developers. In a similar way, we are now seeing the algorithms we built for likes and ads being used for deception and manipulation. We never thought, when we figured out how to accurately suggest videos on Netflix, that the same technology would be used to push white supremacy. We never thought that the algorithms for sharing Grandma’s pictures on Facebook would eventually serve to weaponize false information or sway voter turnout.

Now that we have created these tools, we must begin the arduous process of dealing with them.