Saturday, October 7, 2017

Combating fake news may force big changes at Facebook, Twitter


Laurent Belsie, csmonitor.com [Original article contains links and additional illustrations.]


OCTOBER 6, 2017 — The day after the 2016 presidential election, Facebook CEO Mark Zuckerberg was asked whether social media had contributed to Donald Trump’s win.

“A pretty crazy idea,” he responded at the time. But after months of sleuthing by media organizations, congressional investigators, and Facebook itself, the idea doesn’t look so far-fetched.

“Calling that crazy was dismissive and I regret it,” Mr. Zuckerberg wrote in a Facebook post last week. “We will do our part to defend against nation states attempting to spread misinformation and subvert elections. We'll keep working to ensure the integrity of free and fair elections around the world, and to ensure our community is a platform for all ideas and force for good in democracy.”[JB emphasis]

It is a startling turnabout. After years of defending themselves as neutral communications networks whose sole aim is to foster dialogue, social media companies like Facebook and Twitter are under increasing pressure to take responsibility for the content they carry. Search-engine giant Google faces similar pressure to reform after it, too, promoted fake news stories, including extreme right-wing posts that misidentified the Las Vegas shooter and called him a left-winger.

The proliferation of fake news is forcing these companies to rethink their role in society, their reliance on cheap algorithms rather than expensive employees, and their engineer-driven, data-dependent culture in an era when they are increasingly curating and delivering news.

“This is definitely a crisis moment for them,” says Cliff Lampe, a professor and social media expert in the School of Information at the University of Michigan in Ann Arbor. “They’re just trying to do their business. What they don’t understand is that in the huge panoply of humankind, people are going to try to manipulate that business for their own ends.”

Ads linked to Russian group


It’s clear that Facebook was aware that something was afoot with fake campaign stories as early as June 2016, when it detected a Russian espionage operation on its network and alerted the FBI, according to a Washington Post report. More hints of Russian activity popped up in the following weeks. Facebook’s lengthy internal investigations hit paydirt once the firm decided to narrow its search rather than try to be comprehensive.

This week, Facebook handed over to congressional investigators more than 3,000 ads that ran between 2015 and 2017 linked to the Internet Research Agency, a Russian social media trolling group.

Some of the ads are drawing particular interest because they targeted pivotal voting groups in Michigan and Wisconsin, where Mr. Trump won narrowly. Investigators will probe whether the Trump campaign played any role in helping the Russians target those ads.

But experts suspect the company has only scratched the surface. And the problem stretches beyond Facebook.

An unusual Twitter torrent


During the Republican primaries, Ron Nehring noticed something odd about his Twitter feed. The campaign spokesman for presidential hopeful Sen. Ted Cruz could go on cable television and bash any of Mr. Cruz’s rivals without any social media blowback. But when he criticized Trump, his Twitter account would be deluged with negative and “extremely hysterical” tweets.

“The tone was always extremely hysterical, not something that I would see from typical conservative activists,” he said at a Heritage Foundation event this week.

It is tempting to say that Russia simply manipulated right-wing social media to support Trump’s candidacy. The reality is stranger than that. While a preponderance of the fake posts promoted Trump or criticized his Democratic opponent, Hillary Clinton, on websites crafted to attract right-wing voters, some of them also appeared on sites catering to left-wing causes, such as Black Lives Matter, and religious ones, such as United Muslims of America.

“The U.S. left (liberal) vs. right (conservative) political spectrum was not appropriate for much of this content,” Kate Starbird, a University of Washington professor and researcher of Twitter fake news, wrote in a blog this spring. “Instead, the major political orientation was towards anti-globalism.”

Different groups defined globalism differently, but anti-globalism attracted extremists on both ends of the political spectrum.

An overt attack on trust itself?

And beyond the outcome of any specific election, Russia’s aim may be to sow divisions among Americans (and indeed citizens in other countries, especially in Eastern and Central Europe), Professor Starbird says.

In one sense, none of this is terribly new. Americans have at times been virulently divided, for example during the American Revolution, the Civil War, and the Vietnam War. And fake news has been around since at least ancient Rome. Nor does all of the fake material have an obvious Russian connection.

And some say the Kremlin has meddled in other nations' elections in an attempt to foster distrust in their institutions.

“Since at least 2008, Kremlin military and intelligence thinkers have been talking about information not in the familiar terms of 'persuasion,' 'public diplomacy' or even 'propaganda,' but in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze,” the Institute of Modern Russia, a nonprofit think tank in New York, concluded in a 2014 report. “The aim of this new propaganda is not to convince or persuade, but to keep the viewer hooked and distracted, passive and paranoid, rather than agitated to action.”

The social multiplier


What is new is the scale of Russian meddling and the dramatic shift of political dialogue to social networks, which until very recently clung to the idea that enabling unfettered communication by everyone was an unqualified good, even if it meant giving voice to conspiracy theorists, racists, anti-Semites, and Russian provocateurs.

The reach and speed of these networks make it easy for these ideas to spread before they can be debunked. Facebook claims 2 billion users, nearly a third of humanity. During the last three months of the presidential election, the top 20 fake election news stories on Facebook generated more shares, reactions, and comments than the top 20 stories from major news outlets such as The New York Times and The Washington Post, according to a BuzzFeed News analysis.

Among the most popular fake news stories, one said Ms. Clinton sold weapons to the so-called Islamic State and another claimed the pope had endorsed Trump.

And the meddling continues. Sen. James Lankford (R) of Oklahoma, a member of the Senate Intelligence Committee, said Russian internet trolls last weekend sought to use a recent furor over NFL players and the anthem to further divide Americans.

Part of the challenge lies in these digital giants’ reliance on algorithms to make complex news decisions. Computer programs are cheaper than real-life editors. They also offer political cover.

Facebook has used human editors in the past. But after Gizmodo, citing former employees, reported that Facebook’s news curators routinely suppressed conservative stories from users’ trending topics, Zuckerberg met with conservative editors and the company moved back to algorithms.

But the algorithms are far from neutral. Until exposed by reporters, they allowed advertisers to exclude minorities from seeing ads and, until last month, to target “Jew-haters.” A subtler, more endemic problem is that the algorithms are geared to support social media’s business model, which is to generate traffic and engagement.

In 2014, after protests broke out in Ferguson, Mo., over the police shooting of a black teenager, social media researcher Zeynep Tufekci of the University of North Carolina noticed that her unfiltered Twitter feed was full of stories about the incident. But they were nowhere on her Facebook feed. When she bypassed Facebook’s ranking algorithm, she discovered her friends were indeed talking about the issue; the topic just wasn’t algorithm-friendly.

“It’s not likable,” she told a TEDSummit audience last year. “Who’s going to click on ‘Like’? It’s not even easy to comment on.” Instead of the Ferguson protests, a dominant theme in regular news coverage that week, Facebook highlighted the ALS Ice Bucket Challenge, which was far more likely to be shared, she said.
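A toy sketch makes that incentive concrete. Assuming, purely for illustration, that a feed ranks posts by a weighted sum of predicted likes, shares, and comments (the weights and numbers below are invented, not Facebook’s actual system), hard news that is awkward to “Like” sinks no matter how newsworthy it is:

```python
# Illustrative sketch only: a toy engagement-weighted feed ranker.
# The posts, weights, and predicted counts are invented for illustration;
# real ranking systems are vastly more complex.

def engagement_score(post):
    """Score a post purely on predicted engagement signals."""
    return (1.0 * post["predicted_likes"]
            + 2.0 * post["predicted_shares"]    # shares weighted most heavily
            + 1.5 * post["predicted_comments"])

posts = [
    # Hard news: widely discussed, but awkward to "Like" or joke about.
    {"title": "Protests continue in Ferguson",
     "predicted_likes": 40, "predicted_shares": 25, "predicted_comments": 10},
    # Feel-good challenge: easy to like, share, and comment on.
    {"title": "Ice Bucket Challenge video",
     "predicted_likes": 900, "predicted_shares": 400, "predicted_comments": 300},
]

# The challenge video outranks the protests on engagement alone,
# regardless of either post's news value.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post['title']}")
```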

Seeking a balance on oversight

Another challenge is that even as social networks become mainstream purveyors of news, they’re still largely run by engineers who rely on data rather than editorial judgment to choose newsworthy content. That data-first mentality powers profits because it gives customers exactly what they want. But if they want fake news that supports their worldview, is it ethical to give it to them?

“There is nothing unethical about companies delivering a product or service consumers demand,” Daniel Castro, vice president at the Information Technology and Innovation Foundation, writes in an email. Major social networks have policies about hate speech and already limit some adult content. “But that does not mean that we cannot instill more ethics in how users share content, teach people to be more critical media consumers, or create digital spaces where substantiated facts have more authority than unsubstantiated opinions.”

Already, Facebook has developed a specialized data-mining tool that it deployed during the French elections this past spring, helping the company identify and disable 30,000 fake accounts. The tool was used again in last month’s German elections to help identify tens of thousands of fake profiles, which were deleted.
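Facebook has not said how that tool works, but the general shape of fake-account detection is well understood: combine behavioral signals into a score and flag outliers for human review. Here is a minimal, hypothetical sketch; the signal names and thresholds are invented for illustration and are not Facebook’s:

```python
# Hypothetical sketch of rule-based fake-account scoring.
# Signals and thresholds are invented; Facebook's actual tool
# has not been publicly described.

from dataclasses import dataclass

@dataclass
class Account:
    age_days: int           # how long the account has existed
    posts_per_day: float    # average posting tempo
    duplicate_ratio: float  # share of posts copied from elsewhere (0-1)
    followers: int

def suspicion_score(a: Account) -> int:
    """Count coarse red flags; more flags means more suspicious."""
    flags = 0
    if a.age_days < 30:
        flags += 1  # very new account
    if a.posts_per_day > 50:
        flags += 1  # inhuman posting tempo
    if a.duplicate_ratio > 0.8:
        flags += 1  # mostly copy-pasted content
    if a.followers < 10:
        flags += 1  # broadcasting to almost nobody
    return flags

account = Account(age_days=12, posts_per_day=140,
                  duplicate_ratio=0.95, followers=3)
if suspicion_score(account) >= 3:
    print("flag for human review")  # people, not the script, make the final call
```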

Last month, Zuckerberg pledged to “make political advertising more transparent” on Facebook, including identifying who pays for each political ad (as TV and newspapers already do) and ending the practice of excluding certain groups from seeing ads.

Government has a role to play, too, says Mr. Castro. “Foreign interference in elections, that's the kind of thing we should look closely at ... and prohibit.”
