Meme Wars: Who’s Attacking Who?

In the midst of the Israel-Palestine conflict of May 2021, unverified screenshots were characterized, amplified, and spread by user-generated sources.

by Mitch Chaiet and Emily Berk

The difference between peer-to-peer messaging (a tweet) and top-down distribution (“What’s Happening”)

In America, the classic AM radio model for profitability is simple: scare a bunch of boomers with baseless claims, and they will engage with your radio show. The more people listening, the larger your audience count, and that number is what convinces advertisers to pay you: more listeners means more people hearing each ad, which means more money.

If you are in middle America listening to a radio show, it is very hard to verify claims about the northern or southern border because you’re nowhere near either one. You’re therefore more likely to believe your own trusted sources, like the content you listen to and the content your friends and family send you, since you can’t verify events with your own eyes.

Fear from afar similarly emerges in digital spaces, such as the user-generated claims that circulate on encrypted messaging apps. People in suburban America like to scare themselves into fearing that “mobs” of “protesters” they might see in a neighboring city are coming to their own secluded neighborhood. It’s simply an emotional interpretation of events viewed online when there’s no tangible, physical event to witness firsthand.

Interestingly, this same pattern of fear-from-afar appears in different countries, across different media formats (image, audio, text), and in many different languages; the common thread is user-generated fear. Major media outlets end up responsible for fact-checking the claims that your scared family or friends might send you.

Demographics and stances are the two key elements that define any sort of meme war. If the demographic you fall into is “Israeli” or “Palestinian” that might define your stance as “Pro-Israel”, “Anti-Israel”, “Pro-Palestine”, or “Anti-Palestine”. If your demographic is “American” or “European” then you might have less personal connection to the conflict, so your stance will be affected by the content which is most accessible to you. These labels define you and your feed algorithmically, and are used to serve you the content you engage with.

These labels work on any kind of audience. For instance, one researcher, previously not a soccer fan, found herself targeted as one simply for studying people who already were soccer fans. She became part of the demographic just by looking into the subject:

“I became a de facto soccer fan, based purely on my data trail. Even though there was a specific motivation at play, and even though it was a specific work-related goal that drove me to this madness, the label of “soccer fan” is now a part of my identity that persists today, its remnants visible in the automatically generated recommendations and ads that are served to me online.”

— Prof. Erin Reilly, The Edison Project, 2017

Algorithmic amplifiers come in all shapes and sizes. Large meme pages with millions of followers, for instance, are generally expected to be non-political and only humorous. Individuals who follow these accounts do so because they have been converted from an audience member to a fan. If you see a random meme in your feed that makes you laugh, you’ll likely go follow the page for more:

“We all start out as audience members, but sometimes, when the combination of factors aligns in just the right way, we become fans. And most likely, you’re not a fan of everything, but I bet you’re a fan of something.”

— Prof. Erin Reilly, The Edison Project, 2017

When an audience is expecting non-political and only humorous content, content curators/page admins can feel limited in the personal stances that they can amplify. Audiences often spectate and engage with pages in a passive manner. When the individuals running these accounts break character and repost content relating to a divisive conflict, audience polarization can occur.

For instance, Instagram meme page @3.1415926535897932384626433832 has 4.8 million followers, and the curator constantly reposts others’ content. The entire page is filled with screenshots of tweets: non-original content designed to humorously and passively engage the page’s gigantic audience.

@3.1415926535897932384626433832 on Instagram reposted an infographic by @theslowfactory — a Pro-Palestinian organization.

The admin wasn’t hesitant about promoting content related to the conflict. “i know this is a meme account, but genocide and ethnic cleansing need to be talked about,” read the admin’s caption. The day the page reposted the Pro-Palestinian infographic, it lost 3.9k followers:

IGBlade Data indicates that ~3,900 people unfollowed @3.1415926535897932384626433832 on Instagram after the non-political meme page reposted an infographic by @theslowfactory — a Pro-Palestinian organization.

In contrast, Instagram meme page @biglawboiz was adamant about its pro-Israel stance and lost followers as well. Posting pro-Israel content cost the page a net 372 followers, according to a screenshot posted to the page’s story.

Discrepancies between user-generated content and top-down media content often cause confusion around attacks, natural disasters, and other large, divisive social media events. How a narrative is trusted and characterized between a piece of content’s origin point and your feed determines which stances are most likely to reach you. User-generated pages, often dedicated to a single stance, characterize content alongside professional journalists and fact-checking organizations. When user-generated content is amplified by user-generated pages with large audiences, it can be hard for those viewing the conflict vicariously from afar to determine what’s happening and which side to take. What happens when random people start fact-checking?

Can you tell which one is real? Which one is fake?


When pages solely dedicated to a specific stance promote content related to that stance, third-party content used by the page can easily be characterized to their followers. Instagram page @the.israel.files used two tweets side-by-side to create a third piece of new content, a user-generated fact check, which they posted to their page. The page is dedicated to posting pro-Israel content, so the fact check was quickly amplified by its fanbase.

However, the page itself is not a professional newsroom: its fact checks are user-generated, and we don’t know what its standards of journalism are. @the.israel.files is therefore subject to the same criticism and trust/mistrust that professional fact-checkers face. When followers amplify these user-generated fact checks, asking their own followers to “make sure the articles and posts you’re sharing are accurate,” while both the fact check and the page itself are user-generated, it can be hard for end viewers to verify the characterized content used in the arguments. This can undermine an otherwise valid, user-generated fact check.

An Instagram story circulates user-generated fact checks affirming a particular stance. The fact checks themselves are user-generated, while the person amplifying them asks their viewers to verify them on their own.

The Instagram page @the.israel.files used two tweets side-by-side to create a third piece of new content — and claimed one tweet is “real” and one tweet is “fake”. The phenomenon they are referring to is called False Context, “used to describe content that is genuine but has been reframed in dangerous ways” by First Draft News. @the.israel.files used a “fake” week-old tweet from @alikeskin_tr and a “real” seven-year-old tweet from @MeredithFrost in order to promote a pro-Israel argument. Both tweets exist.

This is not an isolated phenomenon; it occurs across both public-facing social media accounts and private chat rooms. It is worth noting that terms such as “fake news” and “disinformation” have gained significant traction outside the traditional research space over the past year. It seems as though everyone wants to be the arbiter of truth, even meme pages. While the intention is to bring awareness to certain issues, it can be harmful when the information being shared is not entirely accurate. We are not claiming to be arbiters of truth or fact-checkers, but here is one solution we can offer to help combat the spread of disinformation:

Sourcerer 🔀 is a text line that automatically searches Twitter for original tweets. If you text it a screenshot of a tweet, it will source the original tweet. We built it so you can verify the authenticity of tweets you see online. Try it below:

Here a user took screenshots of tweets found on the Instagram page The Israel Files and texted them into Sourcerer 🔀 . Our tool was able to find the links to the original tweets. Not shown: Sourcerer 🔀 sends you an option to add itself into your contacts so you can easily text in your next screenshot!
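Conceptually, a tool like this is an OCR step followed by a search against Twitter. The snippet below is an illustrative sketch in Python, not Sourcerer’s actual code: it assumes the screenshot’s text has already been extracted by an OCR step, and both function names are hypothetical. It shows the middle of the pipeline, cleaning screenshot “chrome” (timestamps, like counts) out of the OCR output and building a search query from the handle plus the first distinctive words of the tweet body.

```python
import re

def normalize_ocr_text(ocr_text: str) -> str:
    """Collapse OCR'd screenshot text into a clean tweet body,
    dropping lines that look like UI chrome rather than content."""
    kept = []
    for line in ocr_text.splitlines():
        line = line.strip()
        # Drop engagement counters like "3.9K Likes" or "412 Retweets".
        if re.fullmatch(r"[\d.,KM]+\s*(Likes|Retweets|Quote Tweets)?", line, re.I):
            continue
        # Drop timestamp lines like "10:45 PM".
        if re.match(r"\d{1,2}:\d{2}\s*(AM|PM)", line, re.I):
            continue
        if line:
            kept.append(line)
    return " ".join(kept)

def build_search_query(handle: str, text: str, max_words: int = 8) -> str:
    """Build a Twitter search query from the author's handle plus the
    first few words of the tweet body (enough to be distinctive)."""
    words = re.findall(r"[A-Za-z0-9']+", text)[:max_words]
    return f"from:{handle} " + " ".join(words)

# Hypothetical OCR output from a tweet screenshot:
ocr = "Meredith Frost\n@MeredithFrost\nA photo from the scene today\n10:45 PM\n3.9K Likes"
body = normalize_ocr_text(ocr)
query = build_search_query("MeredithFrost", "A photo from the scene today")
```

The resulting query string could then be sent to a Twitter search endpoint; matching the returned tweets back against the OCR’d body (and checking the tweet’s actual date) is what exposes False Context like the week-old versus seven-year-old tweets above.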

We offer boutique intelligence, technologies, and data for cross-platform analysis of coordinated inauthentic behavior.