Inauthentic Behavior

Twitter has come under fire recently for some of its enforcement actions, most notably those against the president. Although taking on the White House was a first for Twitter, its move against the bogus white nationalist accounts has precedents.
"We're talking about inauthentic behavior," said Alex Engler, a fellow at Brookings, a nonprofit public policy organization in Washington, D.C.
"That has been prohibited on these platforms for some time," he told TechNewsWorld. "That's not unique to Twitter. Facebook has also taken down inauthentic behavior."
It was not the substance of the account posts that prompted Twitter to act, suggested Vincent Raynauld, assistant professor in the department of communication studies at Emerson College in Boston.
"When it comes to the white nationalist account, it's not what's published on the account. It's about transparency -- the identity portrayed," he told TechNewsWorld. "There is clear obstruction of who is behind the account."

Twitter Is No Town Square

Twitter does muzzle some of its users' content from time to time, however.
"Twitter threatens the ability of users to post content all of the time, thanks to content moderation policies," said Mathew Feeney, director of the project on emerging technologies at the Cato Institute, a Washington, D.C. think tank.
Those policies include restrictions on bots and bans on many kinds of legal content, such as images of graphic violence and pornography.
"Twitter's understandable use of content moderation policies has no impact on the users' legal right to free speech, even if it results in content being removed," Feeney told TechNewsWorld.
First Amendment rights don't extend to platforms like Twitter and Facebook, noted Karen North, director of the Annenberg Program on Online Communities at the University of Southern California in Los Angeles.
"The First Amendment protects our right to free speech in the town square and the steps of city hall, but not in someone's living room or private business," North told TechNewsWorld.
"In a private business like Twitter, the rules are the rules of Twitter, not the First Amendment" she continued.
"Just as restaurants, a club or a business could have a code of conduct or dress code, the social media platforms are private businesses, and when we join them, we agree to their code of conduct," North explained.
"It's not illegal to say things that glorify violence in the town square, but it is against the rules of Twitter," she said.

Platform or Publisher?

Some critics of Twitter maintain that its aggressive enforcement of its terms of use makes it more of a publisher than a platform, and that it should therefore face the same legal liability as publishers.
"That distinction isn't legally meaningful," Cato's Feeney said.
"Section 230 of the Communications Decency Act, which is the law at the core of the social media content moderation debate, does not make a distinction between 'publishers' and 'platforms,'" he pointed out.
"Twitter can be held liable for publishing content -- such as fact checks or a post on Twitter's blog -- but not for the vast majority of content posted by users," Feeney maintained.
"Even traditional publishers enjoy Section 230 protections, as in The New York Times comment section," he said.
What social media platforms are doing now is no different from what they've been doing for years, Brookings' Engler added.
"They're enforcing standards against certain kinds of content that are a problem," he observed.
"A call to violence isn't a new standard," Engler continued. "The difference is that Twitter is holding Trump accountable in the same way. That's what's novel."