WASHINGTON (Sinclair Broadcast Group) — Facebook is once again in the spotlight, this time for its admission that fake accounts, likely operated out of Russia, purchased roughly 3,000 ads on hot-button political issues during the presidential election season.
The fact that Facebook may have provided a platform for foreign interference in the U.S. election has led to a renewed debate over whether the social media giant and its ilk should be regulated.
After years of treating the internet and digital platforms as the Wild West, some are considering bringing in a sheriff.
Earlier this month, Facebook issued a statement acknowledging it sold approximately $100,000 worth of ads from June 2015 through May 2017 to fake accounts operated out of Russia.
The ads focused on "amplifying divisive social and political messages" on issues like race, immigration, guns, gender and sexuality, and at least one-quarter of them were geographically targeted.
When the social media company reported its findings to congressional investigators looking into Russian election meddling, it disclosed an additional $50,000 in ad buys from Russian accounts. By the standards of an American election year, $150,000 is a small sum, but U.S. election law strictly prohibits foreign nationals from spending money to influence campaigns.
According to reports by CNN and the Wall Street Journal last week, Special Counsel Robert Mueller, head of the investigation into Russia's 2016 election interference, requested copies of the ads and information on the $150,000 advertising campaign targeting U.S. voters.
Facebook said it is complying with the requests, but the public, which was exposed to the ads, still has no idea what they looked like, where they were targeted or exactly what messages they promoted.
Facebook's admission has led to an outpouring of calls from lawmakers and regulators on both sides of the aisle who are eager to crack down on foreign interference in U.S. politics.
The chairman of the Senate Intelligence Committee, Richard Burr (R-N.C.), told reporters this week that it "would be very appropriate" for Facebook representatives to testify publicly on the matter.
The committee's ranking Democrat, Mark Warner of Virginia, agreed that Facebook should testify on the Russian ad buys, suggesting that the $150,000 was just "the tip of the iceberg."
Facebook could appear alongside Twitter, which Warner said is scheduled to address the Intelligence Committee soon to discuss whether Russian nationals used the company's advertising platform to promote divisive political messages and influence the outcome of the 2016 election.
Regulators at the Federal Election Commission also decided to reopen public debate on the issue. After the FEC dismissed the matter of foreign nationals spending money in U.S. elections back in February, Democratic Commissioner Ellen Weintraub insisted the Commission reconsider the issue based on "new evidence" provided by Facebook.
"The fact that political ads were being placed and nobody knew where they were coming from is an issue that is squarely within our jurisdiction," she said at a September 14 hearing, insisting the FEC should act to prevent foreign election interference in the future.
Under federal election law, foreign nationals are prohibited from spending money on U.S. elections.
She followed up her comments in an opinion piece for the Washington Post, saying, "For our democracy to work, the American people need to know that the ads they see on their computer screens and in their social media feeds aren’t paid for by Russia or other foreign countries."
But even the possibility of limited FEC regulation is uncertain, according to Republican Commissioner Lee Goodman, who cited Facebook's report that the "vast majority" of the content advertised didn't reference the election or a candidate.
Regulating that kind of influence campaign falls to the Department of Justice, which oversees foreign propaganda and advertisements under the Foreign Agents Registration Act (FARA).
The uproar over political ads is not Facebook's first rodeo. Talk of regulating the Internet giants or even censoring speech on social media has ebbed and flowed in recent years with issues like terrorist content and online recruiting, fake news and troll farms, live-streaming violence, or the process for selecting trending stories.
More recently, ProPublica put a spotlight on Facebook's questionable ad-targeting algorithm, which allowed advertisers to target users who expressed interest in topics like racism and anti-Semitism. After being contacted about the issue, Facebook removed the categories identified in the report.
"I think we are seeing a lot of efforts aimed at the tech companies in a wide variety of contexts, whether its demands for information on its ads, or terrorist propaganda," Chris Calabrese, vice president for policy at the Center for Democracy & Technology.
Before the government comes in with new regulations that could affect millions of users around the country and billions around the world, he argued, it needs to consider whether its policies will produce the results it seeks.
"We need to think about what types of regulations are being pushed, whether they'll work, and whether they're appropriate," Calabrese noted. "When it comes to regulating peopole's ability to speak on these platforms, then you're in much more dangerous and problematic terrain."
Additionally, companies like Facebook, Google and Twitter, which have been criticized in the past for how their platforms have been used, regularly preempt government action and adapt their policies in response to public pressure. In a number of high-profile cases, that response has allowed the companies to continue operating with a free rein.
After the uproar over fake news, Facebook partnered with outside fact-checkers to curb the spread of false information. All three internet giants implemented a new policy to block unreliable sources from earning ad revenue for false "click-baity" content.
Social media platforms responded to government concerns about online radicalization by shutting down thousands of terror-linked accounts and working to eliminate violent content aimed at recruiting or radicalizing individuals.
Brent Skorup, from the Technology Policy Program at George Mason's Mercatus Center, explained that lawmakers and other officials have largely resorted to "social pressure," encouraging companies to adopt higher standards to protect users' interests without applying a heavy regulatory hand.
"I don't see an imminent threat of regulation, but there certainly is a growing appetite," he said. "I hope it just remains a kind of public pressure on companies to improve their processes."
Politicians on both the left and the right share an overarching concern about imposing too many restraints on social media companies, especially after the Supreme Court ruled that internet speech is protected under the First Amendment. The issue is complicated, however, where it concerns foreign actors, who are not protected.
"These are new legal policy issues for lawmakers and courts to consider," Skorup noted.
The issue split the FEC back in 2011, when it attempted to regulate online political advertising but could not reach a consensus. Skorup anticipates that deadlock will likely continue and that the tech giants will keep avoiding regulation, even in light of the new evidence from Facebook.