Malicious Life Podcast: Section 230: The Law that Makes Social Media Great, and Terrible

Section 230 is the pivotal law that has enabled the rise of social media–while sparking heated debates over its implications. In this episode, we're charting the history of Section 230, from early landmark legal battles to modern controversies, and exploring its complexities and the proposed changes that could redefine online speech and platform responsibility.

 


Ted Claypoole

Partner at Womble Bond Dickinson (US) LLP

I'm a lawyer in Atlanta, Georgia, on the FinTech Team and the Privacy and Cybersecurity Team, and I lead the IP Transaction Team at Womble Bond Dickinson. For more than 30 years, I have represented clients on software and service agreements, data management (including privacy, security, and analytics), payment systems, and technology planning.

About the Host

Ran Levi

Born in Israel in 1975, Malicious Life Podcast host Ran studied Electrical Engineering at the Technion–Israel Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.

In 2007, he created the popular Israeli podcast Making History. He is the author of three books (all in Hebrew): Perpetuum Mobile, about the history of perpetual motion machines; The Little University of Science, a book about all of science (well, the important bits, anyway) in bite-sized chunks; and Battle of Minds, about the history of computer malware.

About The Malicious Life Podcast

Malicious Life by Cybereason exposes the human and financial powers operating under the surface that make cybercrime what it is today. Malicious Life explores the people and the stories behind the cybersecurity industry and its evolution. Host Ran Levi interviews hackers and industry experts, discussing the hacking culture of the 1970s and 80s, the subsequent rise of viruses in the 1990s and today’s advanced cyber threats.

Malicious Life theme music: 'Circuits' by TKMusic, licensed under a Creative Commons license. The Malicious Life podcast is sponsored and produced by Cybereason.


Transcript

A complaint you hear a lot these days is that social media is full of garbage and blatant lies. Russian fake news and Trump tweets, tabloid gossip and canceling strangers, conspiracies about the Sandy Hook shooting and Pizzagate and a million other things.

These unsavory hallmarks of our internet age have real, significant, lasting consequences in the real world. Pizzagate inspired one man to barge into a pizza store with an assault rifle, thinking he’d be freeing children from a sex trafficking ring run by Democrats in Washington. Russian disinformation has played no small role in dividing Western societies, fueling the extreme political movements plaguing America and Europe to this day.

The problem got so bad that, in 2018, the Senate called Mark Zuckerberg to testify. Could the man most influential in defining today’s social media defend the monster it’d become? In his opening statement, referring to Facebook as a “tool” for good and bad, he admitted to the company’s many failings. Quote:

“[I]t’s clear now that we didn’t do enough to prevent these tools from being used for harm, as well. And that goes for fake news, for foreign interference in elections, and hate speech, as well as developers and data privacy. We didn’t take a broad enough view of our responsibility, and that was a big mistake. And it was my mistake. And I’m sorry.”

It didn't have to be this way. It wasn't so long ago that we created the internet, after all. Back then–at a time when nearly everyone listening to this was alive–the government faced a choice: make the internet as clean and safe for everybody as possible, or just let everyone run wild. In hindsight, of course, you know which of those paths we chose.

Cubby v CompuServe

“[Ted] let me take you back before the internet.”

Ted Claypoole, Partner at Womble Bond Dickinson (US) LLP, was assistant general counsel to a company called CompuServe in the mid-'90s.

“[Ted] CompuServe was basically founded by a group of insurance companies in Columbus, Ohio, who realized that they have these incredibly expensive computers that they were running between eight in the morning and six in the evening, and then were lying fallow between six in the evening and eight in the morning. And so they tried to figure out something that they could do with these computers to start making some money off of it. And they came up with the concept of allowing technically adept people to create their own communities online.”

This was the turn of the ‘80s, still a decade before commercial internet service providers would begin bringing the web to the masses. Yet even back then, CompuServe’s groundbreaking platform basically worked like online forums still do today.

“[Ted] all we're allowing people to do is interact with each other online, right, to make posts, to have what they called newsgroups in which people have a certain affinity: you know, if you like to fix old muscle cars, for example, there would be an affinity Usenet newsgroup for muscle cars. And you could go on and interact with other people who, like, had mechanical questions and, like, did the same kind of things you did.”

It also featured CB Simulator, the first online chat service ever made available to the wider public. So it's no wonder that CompuServe's initially small user base of just 1,000 subscribers grew past the six-figure mark in just a few years' time (despite only a small percentage of American households owning computers in the first place). By the turn of the '90s–when the company launched one of the first proper commercial internet services–it was hosting around 600,000 users.

Among those 600,000 were a man named Robert Blanchard and his company, Cubby Inc, which created “Skuttlebut,” a newsletter of TV and radio news and gossip. And there was Don Fitzpatrick, who ran a similar newsletter on CompuServe called “Rumorville,” and wasn't happy about his new competition. Fitzpatrick's Rumorville started publishing things about Skuttlebut–that it stole information from Rumorville, that Blanchard had been “bounced” from his last radio job. The Skuttlebut project, it said, was a “new start-up scam.” Blanchard and Cubby sued over this libel, and they decided to also rope in CompuServe itself.

It wasn’t just that CompuServe was the public square in which Rumorville operated. The platform hosted a “Journalism Forum,” of which Rumorville was a part, and it contracted a second company to “manage, review, create, delete, edit, and otherwise control” that forum’s contents. So there was a real but distant connection between these parties. The question was: did this make CompuServe–the platform–liable for what Rumorville–the content creator–did?

“[Ted] So, you had this new technology, And Mr. Cubby was saying, Well, this is like a newspaper: They are publishing things online and other people are seeing it. Whereas the defense CompuServe said, No, we’re not like a newspaper. We’re more like a bulletin board in your local gym, where you can just stick things up and if the bulletin board has something offensive on it, the gym owners might not have read it or even know about it. And so they can’t be held liable for the things that are said unless you tell them about it.  And so the question was, is something like the internet more like the New York Times, or is it more like a bulletin board?”

It’s hard to overstate the significance of this question at this time. In what was essentially the first year of the public internet’s existence, Cubby v CompuServe would decide exactly what an online platform was, and therefore what its responsibilities were under the law.

“[Ted] the court looked at it and said – Well, you have a case against whoever said this stuff, because they’re the publisher of the case. But you don’t have a case against CompuServe, who in this case, really didn’t publish it and may not have known what was in there.”

CompuServe was not The New York Times; it was a bulletin board.

That judgment would seem to have solved the matter. Internet companies wouldn’t have to pore over everything everyone wrote on their sites. A reasonable conclusion, you’d have to say.

Except one important question was left unaddressed, and it soon became a problem for CompuServe’s biggest competitor.

Stratton Oakmont v Prodigy

Prodigy Services would tell you that it was the first-ever consumer online service. Not CompuServe. Unlike CompuServe–which in its early years used a barebones command-line interface–Prodigy sported a graphical user interface from its founding in 1984 as a joint venture between CBS, IBM, and Sears.

In the decade that followed, Prodigy and CompuServe fought neck and neck. By the time of the Cubby case in 1990, CompuServe had a clear lead–600,000 subscribers to 465,000–but by '93 Prodigy had pulled ahead. (Ultimately, the winner would be their other biggest competitor, America Online.)

More than its UI, Prodigy distinguished itself by being “family oriented.” Unlike that garbage dump CompuServe, Prodigy wouldn't allow content that your nana or your child would find unacceptable.

Perhaps you can see where this is going.

“[Ted] Unlike Cubby, where CompuServe basically said, hands off, we let everybody post anything they want [. . .] Prodigy handled things differently. Prodigy actually did more curating and monitoring of the information that went onto its site–and that was a substantial issue for the court.”

On October 23rd and 25th, 1994, an anonymous individual wrote on a Prodigy bulletin board called “Money Talk.” His posts concerned Stratton Oakmont, a Long Island-based securities investment banking firm.

Stratton Oakmont and its president, Daniel Porush, were committing a “major criminal fraud,” he wrote–a “100% criminal fraud,” in connection with a recent initial public offering of stock it had managed for a company called Solomon-Page. The firm was a “cult of brokers who either lie for a living or get fired,” and the president was “soon to be proven criminal.”

This case was a little different from Skuttlebut's. Besides concerning a much more serious matter than online gossip, the posts happened to be true. Porush, along with Jordan Belfort–later memorialized as the “Wolf of Wall Street”–was indeed committing rampant criminal fraud. But the world didn't know that yet, so Stratton Oakmont–in lieu of identifying the original poster–sued Prodigy for defamation.

And thus, Prodigy's value proposition became its undoing. For years, they'd advertised just how deliberate they were about the content they allowed. There were guidelines, people in charge of enforcing them, even software that censored bad language. The company once wrote, quote:

“We make no apology for pursuing a value system that reflects the culture of the millions of American families we aspire to serve. Certainly no responsible newspaper does less when it chooses the type of advertising it publishes, the letters it prints, the degree of nudity and unsupported gossip its editors tolerate.”

“[Ted] The court said, because you're curating this, because you are telling people that you're editing it–we see you more as a publisher.”

The judge made sure to clarify his opinion. Quote, “Let it be clear that this Court is in full agreement with Cubby [. . .] Computer bulletin boards should generally be regarded in the same context as bookstores, libraries and network affiliates.” End quote.

However, he added, quote, “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.”

“[Nate] What kind of effect did it have on internet companies that someone was held liable for something like this?

[Ted] They freaked out.”

The Cubby and Stratton Oakmont cases each made sense on their own. Together, however, they created a perverse incentive where internet platforms would be rewarded for being as negligent as possible.

“[Ted] You can avoid that liability by simply saying: hands off, I won’t do anything with the information that comes in. Whereas if I decide to massage it in any way, or edit it in any way or curate it in any way, it looks like I may be liable for all of it.”

What online company in its right mind would choose to moderate content when doing so could only expose it to legal punishment? The judge in the Stratton case made a prediction. Quote:

“For the record, the fear that this Court’s finding of publisher status for PRODIGY will compel all computer networks to abdicate control of their bulletin boards, incorrectly presumes that the market will refuse to compensate a network for its increased control and the resulting increased exposure.”

In other words, internet users will prefer moderated online forums so much that it’ll make up for this extra legal hassle.

Strangely, the judge failed to account for a decade's worth of data proving him wrong. Prodigy–censored–and CompuServe–uncensored–had been competing for a long time by this point, and for most of that time CompuServe was the more popular of the two. Clearly, content moderation was not a make-or-break issue for most users.

So if anybody expected internet platforms to moderate content ever again, something would have to be done to protect them.

Section 230

As it so happened, while the Prodigy case was being decided, Congress in Washington was drafting its first major update to telecommunications law in more than 60 years. What would come to be known as the Telecommunications Act was first introduced in the Senate just six days after the Prodigy decision, in May of 1995, with the overall goal of broadening and deregulating America's broadcasting and telecommunications industries–now including the internet.

Within the Telecommunications Act, a Democratic senator added the Communications Decency Act, or CDA, primarily designed to prevent children from being exposed to foul language and pornography on the web.

Within the CDA was one particular section of note: Section 230.

And within Section 230 were what are now commonly referred to as “the 26 words that created the internet.” Quote:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

“[Ted] Thanks to Section 230, CompuServe, AOL, Prodigy, and others who were providing online access could not be directly sued for information or defamation that was put on their services by a third party.”

It might sound like the government was trying to shield internet companies from the law, allowing them to be totally negligent regarding the content posted to their platforms. In fact, it was kind of the opposite. After Prodigy, only by shielding them from the law could the government hope to convince them to moderate their platforms at all.

In essence, Congress was telling internet companies: you're clear of liability now, so please be nice and moderate the content posted to your platforms. The spirit of the law inspired its official, longer title–quote–“Protection For ‘Good Samaritan' Blocking and Screening of Offensive Material.”

Zeran’s Story

On March 4th, 2011, some of the key figures responsible for the birth of Section 230 gathered for its 15th anniversary, at the High Tech Law Institute at Santa Clara University’s School of Law.

One of the most important speakers at the event was a slim, bald, good-looking man, by then in his later years. An artist and television producer from the Northwest, he didn't have any part in creating Section 230 all those years ago, but he was arguably more important to its history than anyone besides the two congressmen who wrote it. Not because he wanted to be.

He was roped in on April 25th, 1995. This was one month before the decision in Stratton Oakmont v. Prodigy Services, five weeks before the first draft of the Telecommunications Act was submitted to the Senate, and six days after what was, at the time, the deadliest terror attack in US history: the bombing in Oklahoma City. In that attack, two men with a truck bomb outside a federal building killed 168 people, including 19 children.

You can imagine, then, how people might have reacted when shortly thereafter, on an AOL bulletin board called “Michigan Military Movement,” an anonymous poster began advertising “Naughty Oklahoma” t-shirts and merchandise.

“Visit Oklahoma … It’s a BLAST!!”

“McVeigh for President 1996”

“Putting the kids to bed … Oklahoma 1995”

A name and number were listed for interested customers: “Ken,” and the home phone number of Kenneth Zeran, who didn’t know a thing about any of this until the barrage of angry and threatening calls started coming in.

Quickly realizing what had happened, Zeran contacted AOL to have the post removed. AOL complied the following day.

But later that same day came a second post, from a slightly different username: KEN ZZ033. Already, the poster said, the t-shirts had sold out, and new slogans were now available. Slogans like “Forget the rescue, let the maggots take over — Oklahoma 1995” and “Finally a day care center that keeps the kids quiet — Oklahoma 1995”. Just like the first time, the post listed Ken Zeran's name and telephone number. More posts like these continued for a week, and Zeran–who used his home phone for work as well–couldn't do much to stop the barrage of abusive calls. Four or five days in, he was still receiving a call from a stranger every two minutes.

Surely this wasn’t how the internet was supposed to be. A year later Zeran decided to take a stand, so that nobody would have to go through what he did again. He sued AOL.

Zeran v AOL

Thanks to the timing of the case, AOL's lawyers were able to present a novel argument to the court. Two months earlier, Section 230 had been passed into law. Did it not clear AOL of the responsibility of policing content published to its service by third parties?

“[Ted] His argument was, you knew about this, you should have known about this, and you should have taken it down. The Internet service's argument was, well, we knew about it and we did take it down. And Zeran said–well, then you should have kept the same people from being able to post more things online [. . .] And so he basically was saying this is your responsibility, online provider, to make sure that once you've taken these things down, that nobody can put more of them up again.”

In essence, Section 230 had made clear that AOL couldn't be considered the publisher of the lies about Ken Zeran. But surely, his lawyers argued, there was a limit. As a distributor of content, AOL was still responsible, in some way or another, for acting to prevent violent lies from being spread.

In his deciding opinion, the conservative Fourth Circuit judge J. Harvie Wilkinson III explained that holding AOL to such a standard was a bridge too far. In a text which laid the groundwork for how Section 230 would be applied forever thereafter, he reasoned that, quote:

“If computer service providers were subject to distributor liability, they would face potential liability each time they receive notice of a potentially defamatory statement — from any party, concerning any message. Each notification would require a careful yet rapid investigation of the circumstances surrounding the posted information, a legal judgment concerning the information’s defamatory character, and an on-the-spot editorial decision whether to risk liability by allowing the continued publication of that information. Although this might be feasible for the traditional print publisher, the sheer number of postings on interactive computer services would create an impossible burden in the Internet context.“

Zeran's logic would not merely force internet companies to stay constantly on top of everyone's posts, Wilkinson thought, but would also create a new set of perverse incentives when it came to separating bad content from good. Quote:

“Because service providers would be subject to liability only for the publication of information, and not for its removal, they would have a natural incentive simply to remove messages upon notification, whether the contents were defamatory or not. [. . .] Thus, like strict liability, liability upon notice has a chilling effect on the freedom of Internet speech.

Similarly, notice-based liability would deter service providers from regulating the dissemination of offensive material over their own services. Any efforts by a service provider to investigate and screen material posted on its service would only lead to notice of potentially defamatory material more frequently and thereby create a stronger basis for liability. Instead of subjecting themselves to further possible lawsuits, service providers would likely eschew any attempts at self regulation.”

“[Ted] It was a reasonable decision, looking at what the law says. And unfortunately, in some cases, bad things happen to good people, and it's really hard to find a place under the law where you can do something about it.”

Debating 230

Even a decade and a half later, at the anniversary event in Santa Clara, California, Zeran held to his negative view of Section 230. To demonstrate why, he pointed to the law's namesake: the Parable of the Good Samaritan, from the New Testament's Gospel of Luke.

In that passage, Jesus tells of a traveler to Jericho who is stripped and beaten by robbers. Quote:

“A priest happened to be going down that road, but when he saw him, he passed by on the opposite side. Likewise a Levite came to the place, and when he saw him, he passed by on the opposite side. But a Samaritan traveler who came upon him was moved with compassion at the sight.”

After reciting the entire Parable, the artist posed a question to the crowd. Quote:

“Did Jesus mean that it is ok for the priest and Levite to either help the dying man or not? That they would not be liable of breaking the ‘new’ law by failing to respond to the dying man? Is that the lesson? [. . .] This is exactly what Section 230 says. One is NOT required to help the ‘dying man’. […] Section 230 has turned the parable and its own namesake upside down.”

At the end of the day, Zeran is right: aside from certain rules for truly extreme content, Section 230 enables internet companies to, proverbially, ignore the dying man. It allows Facebook to let Russian fake news run rampant. It allows Twitter to delete the profile of a leading presidential candidate, and X to allow it back. It allows crackpots and liars to spread lies about victims of school shootings, and, so long as they stay anonymous, it allows your neighbors to post whatever they want about you.

And yet, you might agree, Section 230 is the best of a lot of potentially bad ways to regulate something so big and unwieldy as the internet.

“[Ted] I think it basically gets it right because, like we said before, social media companies are not posting: they allow certain posts online, and then if they're told that that's a problem, they can take it down. So the bigger issues now are people that are saying that they don't like the moderation habits and patterns of the social media companies–but keep in mind, these are companies. They are allowed to have their own rules. They're allowed to say what they want on their own services.”

Proposed Changes to 230

For his part, Kenneth Zeran would not accept this compromise. In his 2011 speech, he tried to inspire the audience to action. Quote:

“Let us also remember what my friend, the great surrealist Salvador Dali, once said: ‘Don't worry about perfection–you'll never obtain it!' But we can try! Isn't this why we are all here today?”

Citing a less famous sentence within the Communications Decency Act–that, quote, “It is the policy of the United States to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer,” end quote–Zeran proposed a brief amendment to Section 230. In his vision, the FBI and local law enforcement would take a greater role in policing harmful content online, and, quote, “A provider or user of an interactive computer service shall be held liable on account of – failure to remove certain material upon notice by Federal Law Enforcement.”

Others have suggested a variety of other possible changes. Some have argued that Section 230 could still apply to nonprofits and to platforms that don't exert any influence over content–like blogging platforms and Wikipedia–but that once you start introducing profit and algorithms and all those things that make the internet nasty, you lose your protection.

A softer approach, suggested by legal scholars, would be to preserve Section 230, but only for companies that, quote, “take reasonable steps to address unlawful uses of its service that clearly create serious harm to others.” End quote. As an article in Wired magazine explained, this kind of proposal would introduce common law into the equation–the opinions of judges, who would be granted the power to determine, for the rest of us, a reasonable standard to hold these companies to.

Whatever you think of such ideas, they’re unlikely to gain traction soon. That’s not because there’s a lack of interest in amending Section 230, though. In fact, the problem is quite the opposite.

Politicization of 230

“[Ted] There have been, oh gosh, probably between 30 and 50 acts introduced in Congress over the last 15 years or so that would amend, otherwise revise, or eliminate Section 230.”

The appetite to change or get rid of Section 230 has always existed, and it’s never been stronger than in recent years. And yet, any good ideas about how that might be done are likely to be drowned out by the more political reasons many have targeted this law.

“[Ted] The concept there is that a certain set of people on the political spectrum,”

Republicans.

“[Ted] feel like the algorithms and the policies of social media are discriminating against them, and that they shouldn't be allowed to do that. Now, Facebook and others would say, well, we do discriminate against lying, for example. And there are things where we can show that what you're trying to get on there is simply not true.”

Some way into Zuckerberg’s Senate hearing–the one we mentioned at the opening of this episode–Senator Ted Cruz got the chance to question the social media tycoon on whether his apps have abused the law that allowed them to become so big in the first place.

Controversial though he may be, Cruz is a seasoned lawyer with experience arguing in front of the Supreme Court. Few people are more capable of asking the kinds of questions that might lead to real, long-term solutions. Instead, he asked the following. Quote:

“The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?”

Zuckerberg, well-prepared, failed to give Cruz the soundbite he wanted–the one which would prove that Facebook targets Republicans and promotes liberal speech. Instead of debating the merits of the law and its application to Facebook, Cruz devolved into ever more tabloid-style accusations. The CEO left the exchange largely unscathed.

We may have passed the point of considered, philosophical dialogue about Section 230. From here on, its fate will be determined by how useful it is as a political weapon.

“[Ted] We're in an election year. If in the next election the Republicans sweep everything and they've got both houses of Congress and the presidency, then I think it'll be a whole different question. In other words, Section 230 may be something that goes, because Mr. Trump has said that he will be the retribution for people who vote for him, and essentially eliminating Section 230 is one of the ways the Republicans have looked to have retribution against social media that they feel has not treated them well.”