Israel-Hamas information war challenges media, public
All wars are also information wars. False and misleading online images from Israel and Gaza have lit up social media. In the instant-news era, verification presents a dilemma for journalists.
The Oct. 17 explosion at al-Ahli Arab Hospital in Gaza City was a case in point: Hamas said an Israeli airstrike had killed hundreds of civilians, a claim that ricocheted across global media outlets. Israel quickly denied the claim and said a militant-launched rocket had misfired and landed on the site. In subsequent days, visual evidence emerged to support Israel’s version of events, which the U.S. Department of Defense also backed, citing its own intelligence. News organizations have tried to verify the source of the explosion by comparing videos and asking munitions experts to examine photos of the site.
But it’s easy to manipulate the news media with false claims, knowing that the pressure to be first with breaking news means a rush to report before the facts are clear, warns Peter Singer, a professor of practice in the Center on the Future of War at Arizona State University who studies cybersecurity. “Both the media and the social network firms (or at least their owners) seem to have learned too little when it comes to the deluge of online misinformation and deliberate disinformation that is now the norm in conflicts,” he says via email.
The claims of an alleged Israeli airstrike on a hospital – which is protected from military attacks under the Geneva Conventions – had immediate political and diplomatic consequences: Protests erupted in several Arab countries last week, and a planned summit between President Joe Biden and leaders of Arab countries was canceled. On Monday, The New York Times wrote in a substantial editor’s note that it “should have taken more care” with its initial reporting on the incident, which “left readers with an incorrect impression about what was known and how credible the account was.”
Fog of war – then and now
For much of human history, civilians have been poorly informed or misled about the course of conflicts, including at home. The use of propaganda during wartime, and warnings about its effects, also have a long lineage: English writer Samuel Johnson wrote in 1758, during the Seven Years’ War between Britain and France, that war falsehoods diminish “the love of truth.” In a similar vein, Sen. Hiram Johnson of California, an isolationist who opposed U.S. entry into World War I, was reported to have said that “the first casualty when war comes is truth.”
Compared with a century ago, civilians have access to reams of online data and images that, in theory, offer a counterpoint to propaganda by governments and warring factions. Today, much of this information is disseminated by social media platforms owned and controlled by U.S. tech companies. In 2011, when anti-government protests began to spread across Arab countries, Twitter (now called X) and Facebook offered both an uncensored space to organize and a window for the world into the protests that became the Arab Spring.
But to follow the 2023 Gaza conflict on X is to peer into a “fun house mirror” in which almost nothing can be trusted, says Mathew Ingram, chief digital writer at the Columbia Journalism Review. “Most people felt that most of what they were getting through Twitter was credible information … and the assumption now is that it’s not true.”
Under Elon Musk, who bought Twitter last year for $44 billion, the social media platform has disbanded teams that worked on combating misinformation and hate speech. It has also removed blue check marks from the accounts of politicians, celebrities, and other public figures whose identities had been verified, and instead sold check marks to subscribers. Critics say these subscriber accounts, whose posts are amplified by X and become eligible for payments if they go viral, have been among the most active spreaders of misinformation about Gaza, presumably for financial gain.
Using X as a news source “is a lot more work than it used to be,” says Mr. Ingram. “The account could be fake. The information could be fake. The photo could be fake.”
Last year X launched a crowdsourced fact-checking service called Community Notes, designed to flag suspect posts. Its effectiveness and credibility had been questioned, though, even before the conflict in Gaza. Other social media platforms, such as Facebook and TikTok, have also struggled with misleading posts, as well as coordinated disinformation campaigns. An executive from Cyabra, an Israeli bot-monitoring firm, told Reuters that it had uncovered more than 40,000 fake accounts sharing pro-Hamas content and that many had been created long before the attack. “The scale suggests there was pre-prepared content and manpower into getting it out,” said Rafi Mendelsohn, Cyabra’s vice president.
A human problem, not a technical one
While many blame Mr. Musk for weakening X’s guardrails against misinformation, the sheer volume of false and misleading content presents a daunting challenge for would-be monitors. “Even if we had a million fact-checkers, I’m not sure we’d be able to solve this problem. It’s a human problem, not a technical problem,” says Mr. Ingram.
In Finland, media literacy is taught in elementary schools. And learning how to check online images to verify their provenance is fairly straightforward, notes Shayan Sardarizadeh, a BBC reporter who specializes in verification of online information. But even professional fact-checkers have expressed concern about how many falsehoods from Israel and Gaza have pinged around the world in recent weeks.
News consumers are often vulnerable to disinformation from war zones because they are primed to believe claims that fit their worldview, says Philip Seib, author of “Information at War: Journalism, Disinformation, and Modern Warfare.” Even when news organizations inject caution into their coverage and seek to set the record straight, however long that takes, they are competing with an unfiltered stream of digital information. “The public has to train itself to say, ‘Here’s what so-and-so says; here’s what the other side says. Let’s wait a minute,’” he says.