Do The New SEC Rules On Linking Violate Section 230 Safe Harbors?

from the maybe-so... dept

Eric Goldman has submitted comments to the SEC explaining how its recent guidelines concerning companies’ liability for links on their websites most likely violate the safe harbor provisions in Section 230 of the CDA. Section 230, as we’ve discussed over and over again, provides a clear safe harbor to protect third parties from liability for the actions of others. While we personally think it should just be common sense that third parties shouldn’t be liable for the actions of others, it’s been clear for way too long that common sense isn’t really all that common.

In this case, the SEC indicated that some companies may be liable for content on third party sites that they link to, if the link gives the impression that the company has approved or endorsed the info. As Goldman points out, this appears to be in violation of the safe harbors, as no one should be liable for content they have no control over — even if they indicate they might endorse that content.



Comments on “Do The New SEC Rules On Linking Violate Section 230 Safe Harbors?”

17 Comments
Twinrova says:

I'm still not sure about Safe Harbors, but I see its benefits.

I’ve always viewed SH as a law which allowed the internet to grow, but now that it’s all grown up, it has outlived its original purpose.

I know many who read this will definitely challenge my opinion, but I do believe websites should be held responsible for the actions of their users.

Turning a blind eye to what users do on websites is no longer a valid argument when it’s quite clear there is intentional “harm” being done.

Debaters, re-read the last paragraph before you challenge my opinion.

The internet has grown to take on more than its original intended use. That’s a great thing, of course, but now there’s a caveat within this growth: intentional harm. A recent blog was posted regarding the removal of prostitution ads on Craigslist. Safe Harbors allowed Craigslist to turn a blind eye, but this shouldn’t be the case.

There was intentional harm present and Craigslist should have removed these ads long before they became a public issue. While some of these ads would change to “massage” ads, this is different because this ad isn’t intentional harm as there would be no way to determine this. But a sex ad? Inexcusable.

It would be no different than if someone posted an ad on this site for child sex. It should be Techdirt’s responsibility to remove this message and shouldn’t be allowed to claim “Safe Harbor” should they be caught in leaving it.

Letting businesses turn away from known intentional harm is wrong. Because of Safe Harbors, it’s almost impossible to go anywhere with user comments and not see how destructive communication has become.

Even on this site, users take it upon themselves to intentionally harm others simply because they disagree. Calling someone an asshole just because they disagree with you is intentional harm.

“But they’re just words!”, you argue. Not true. These types of comments perpetuate a growing anger within people, and many are starting to take this anger into the “real” world. How often do you hear of stories (growing!) about people being killed over what’s been done online? Wasn’t there a recent story about a fake MySpace page?

MySpace should easily be held accountable for allowing intentional harm to continue. The page should have been removed, the users banned, and maybe someone survives the outcome. But hiding behind the walls of Safe Harbor only means MySpace continues as normal while further intentional harm continues with absolutely no one taking “responsibility”.

It’s only when this intentional harm blows into the real world that it finally gets “attention”, even though it’s short-lived.

Before I go, keep this in mind: Imagine, for just a second, what this country would be like if Safe Harbors were to be given to parents over the actions of their children.

Now explain why it should be different for the virtual world. The website is the parent. Everything else is the child.

I’ll keep an open mind to the replies I’m sure to get here. Keep in mind I’m not against Safe Harbors, but I do believe it’s more of an excuse than it is protection.

Duane (profile) says:

Re: I'm still not sure about Safe Harbors, but I see its benefits.

You are wrong on every single aspect of your comment.

Web sites should not be held responsible for the actions of their users because they can’t control them and they have no idea what actions users should be held responsible for.

For example, this is a blog that I’m sure is read internationally. If I were to invite you to attend a white power meeting, in America that would just be in poor taste. In Germany, I wager the government would be a lot more interested in that statement and might even consider looking me up. How does TechDirt define responsibility in that instance?

A sex ad is “intentional harm?” Prove it. Moreover, why should Craigslist need to make that determination? What’s next, fraudulent listings? bitter diatribes against ex-boyfriends? I’d say those are more intentionally harmful and Craigslist has just about as much chance of being able to determine those as a “sex ad.” Especially given that Craigslist also includes parodies and other instances of humor. How do you tell the difference in a programmatic manner?

Commenting seems to lead to a lively debate on many of the forums I frequent and although there are always trolls, as long as you’re adding to the debate, I’m willing to put up with a little name calling. Sticks and stones and all that.

And finally, your parent-child analogy is ill-conceived, inadequate and inappropriate. TechDirt is not my daddy and it isn’t responsible for my actions.

It has no way to even identify what actions it needs to control because it has no idea where I live, where anyone who reads this might live or what direction the political wind is blowing in both locations. That is why you need the Safe Harbor, because it protects sites from people like you who would require omniscience from organizations that are sometimes 2 or 3 guys getting together to discuss what they love on the weekends.

ScytheNoire (profile) says:

Re: I'm still not sure about Safe Harbors, but I see its benefits.

Twinrova couldn’t be more wrong.

Safe Harbors are very important and more are needed. Whenever you put limits on anything, you destroy the potential for progress.

If we didn’t have things like patents and copyright, we’d be so much farther along in our advancement it would make today look very primitive.

We need more freedom, not less. Safe Harbors are needed to stop those who want to violate freedom.

Without protection for things like linking on websites, you’d basically destroy the entire search engine market. You’d destroy the entire internet. Its usefulness as a communication tool would be gone.

Once again, it’s about those with power and money wanting to control everything.

There needs to be a revolution against the government and corporate control.

Mike (profile) says:

Re: I'm still not sure about Safe Harbors, but I see its benefits.

I’ve always viewed SH as a law which allowed the internet to grow, but now that it’s all grown up, it has outlived its original purpose.

That’s incorrect. Safe harbors are designed to *properly* assign liability. If someone is getting blamed for a crime they did not do, that’s a problem.

It troubles me that you would think it’s fair to blame someone for something they did not do.

Turning a blind eye to what users do on websites is no longer a valid argument when it’s quite clear there is intentional “harm” being done.

You clearly do not understand safe harbors, and I’m guessing this explains your odd opinion here. Safe Harbors do not say turn a blind eye. In fact, the DMCA SHs at least do require action once you’ve been alerted. But what SHs DO say is that you shouldn’t be forced to proactively police. And that makes perfect sense, because how is the service provider to know what is and what is not legit?

How is the service provider supposed to review every bit of content and use? They can’t. It’s simply impossible.

A recent blog was posted regarding the removal of prostitution ads on Craigslist. Safe Harbors allowed Craigslist to turn a blind eye, but this shouldn’t be the case. There was intentional harm present and Craigslist should have removed these ads long before they became a public issue. While some of these ads would change to “massage” ads, this is different because this ad isn’t intentional harm as there would be no way to determine this. But a sex ad? Inexcusable.

Again, showing your ignorance. Craigslist DID remove the pure sex ads. So your entire argument is incorrect.

The problem was that state AGs said that Craigslist had to block the ads where it wasn’t clear if they were sex ads or not.

Even on this site, users take it upon themselves to intentionally harm others simply because they disagree. Calling someone an asshole just because they disagree with you is intentional harm.

No, actually, it’s not. And I don’t care if you think I’m an asshole for saying it.

“But they’re just words!”, you argue. Not true.

Actually, very true.

These types of comments perpetuate a growing anger within people, and many are starting to take this anger into the “real” world. How often do you hear of stories (growing!) about people being killed over what’s been done online? Wasn’t there a recent story about a fake MySpace page?

And if you bothered to actually understand that case, you’d recognize how ridiculous these claims are. You are trying to blame a website for comments made by a third party that made a fourth party do something?

Yikes. I don’t want to live in a country like that. I don’t think you do either.

Coolhand says:

Beware Unintentional Consequences

It is all well and good to say a Web site should be responsible for what it endorses or links to, but with the proliferation of links and the rapidity with which linked content can change, it would be impossible for any Web site administrator to constantly monitor all content of all links looking for “objectionable” (as defined by whom?) material. The Safe Harbor protections have not outlived their usefulness any more than the First Amendment has outlived its usefulness.
Also, the comparison to Craigslist ads doesn’t hold, because those ads are content on Craigslist’s own Web site, not content linked to on another site outside its control.

mkvf says:

Goldman’s earlier post on the same issue touches on why the SEC might have issued the guidance it has, and why the argument isn’t as simple as some here are putting it:

This proposal raises an even broader issue: how 47 USC 230 overlays on securities laws generally. I can’t really think of a defendant who has litigated 230 as a defense to securities fraud claims, but it seems like a tenable defense for any online securities marketing done by third parties–a result which might wreak havoc on existing secondary doctrines of civil liability for securities fraud.

It’s possible to think of situations where pump-and-dump scammers might use links to fake analysis, posted elsewhere, to give the impression that a company they control is a good investment. You could, for example, use discussions on Yahoo or Google’s finance pages, or a site like Motley Fool, to achieve that, or even set up fake investment blogs.

Maybe saying that safe harbor provisions don’t apply to corporate websites is a bad idea, and maybe it contradicts S230. However, it doesn’t seem like a bad idea to do something to protect investors against scams. People should check their sources before they make investments, but clearly not everyone does.

Maybe a better way to tackle that problem would be to make clear that corporations have to avoid any appearance of collusion with or manipulation of fake analysis. That would be an equally difficult line to draw, though.

The simplest approach would be to say traded companies should never carry, or link to, third party analysis of their own business or market.

mkvf says:

I wouldn’t think it a great loss to the world if they weren’t allowed to. Nor is it really the best example to pick, though, considering how notorious pharma advertising and marketing campaigns are.

If you don’t ban external links, and financial regulators aren’t allowed to hold companies responsible to some extent for checking the accuracy of their content, how do you best prevent the sort of fraud I’ve suggested? If you take out fake ‘grassroots’ lobbying and minimise manipulation of medical data at the same time, isn’t that a good thing?

Rekrul says:

I know many who read this will definitely challenge my opinion, but I do believe websites should be held responsible for the actions of their users.

So if you organize a fundraiser, which I attend, and I offend some of the other attendees, it’s your fault that I’m a jerk? How about if I get in an argument and punch one of them, should they then sue you?

Turning a blind eye to what users do on websites is no longer a valid argument when it’s quite clear there is intentional “harm” being done.

Continuing with the above analogy, you would consider it a legal requirement that you personally approve and moderate every conversation at that event to ensure that nobody says anything that offends anyone else?

The internet has grown to take on more than its original intended use. That’s a great thing, of course, but now there’s a caveat within this growth: intentional harm. A recent blog was posted regarding the removal of prostitution ads on Craigslist. Safe Harbors allowed Craigslist to turn a blind eye, but this shouldn’t be the case.

There was intentional harm present and Craigslist should have removed these ads long before they became a public issue. While some of these ads would change to “massage” ads, this is different because this ad isn’t intentional harm as there would be no way to determine this. But a sex ad? Inexcusable.

First of all, I’m baffled at why prostitution is still a crime in the US. Almost all of the arguments that people raise against it, such as the spread of disease, abusive pimps getting girls hooked on drugs, women forced to sell their bodies on street corners, women getting beaten up or killed by their customers, etc, are the direct result of it being illegal. You don’t hear of any of this happening in places where prostitution is legal, like the brothels in Nevada, or other countries.

If it were legal, the women who wished to make a living this way could do so in a safe environment, they would be regulated and taxed like any other worker, and there would be rules such as no unprotected sex and monthly checkups to curtail STDs. This would make sex for hire more common, but I don’t see that as a bad thing. Maybe it would cut down on date rapes if frustrated guys could just pay a legal call when the woman they’re with says “no”.

Of course the feminists, bible-thumpers and everyone else with a stick up their ass would scream bloody murder about how legalizing prostitution would destroy the moral fabric of the country and everything would descend into chaos with hookers in lingerie hanging out in front of schools and luring men out of otherwise rock-solid marriages, but that would be the extent of their arguments against it. The truth is that the laws against prostitution are outdated and should be repealed. Other countries have done it and they haven’t self-destructed yet. Of course some countries have always had a casual attitude toward sex and nudity that would have most of the uptight prudes in America foaming at the mouth.

It would be no different than if someone posted an ad on this site for child sex. It should be Techdirt’s responsibility to remove this message and shouldn’t be allowed to claim “Safe Harbor” should they be caught in leaving it.

Two points:

1. There’s a difference between advertising sex for hire from an adult woman who has chosen to work in that profession and advertising sex with a child who probably doesn’t have any choice in the matter.

2. Techdirt would remove such an ad if they became aware of it. Do you propose that they be sued because they didn’t personally verify each and every post before it’s made? How about this: if you’re so concerned with making sure that objectionable content isn’t posted by users, why not volunteer to personally screen each and every post made to this site? I’m sure they don’t get more than 200-300 posts a day. That should be easy for you to do, right? Make sure you’re available 24 hours a day so that you can moderate the comments within a reasonable length of time.

“But they’re just words!”, you argue. Not true. These types of comments perpetuate a growing anger within people, and many are starting to take this anger into the “real” world. How often do you hear of stories (growing!) about people being killed over what’s been done online? Wasn’t there a recent story about a fake MySpace page?

How often do you hear about people killing each other over trivial stuff that happens in the real world? It’s a symptom of a much bigger problem, not a direct result of it. For the last 50 years or so, “experts” have been pushing the idea that any form of child discipline that the child dislikes should be considered cruel, from spanking, to yelling, to confining them to their room. Several places have made spanking or any form of physical punishment into a crime. Children today aren’t afraid of breaking the rules because they know that their parents really have no power to punish them. This has resulted in a generation with little to no respect for authority and, for many, a slim grasp of right and wrong.

Have you ever watched any of those talkshows with titles like “Help! My teenager is out of control!” where the kids call their parents assholes and say they’ll do whatever they want? Look at them and then tell me that you think the real problem is “bad words” on the internet.

MySpace should easily be held accountable for allowing intentional harm to continue. The page should have been removed, the users banned, and maybe someone survives the outcome. But hiding behind the walls of Safe Harbor only means MySpace continues as normal while further intentional harm continues with absolutely no one taking “responsibility”.

How exactly was MySpace supposed to know that the users were harassing a girl? There’s no mention that anyone involved reported the abuse to MySpace. With millions of users, how are they supposed to keep tabs on every single one of them? Even if they were able to, do you really want web sites like MySpace spying on every single thing people do on the site and censoring anything they think might potentially be a problem? Should Techdirt have censored your post because your opinions might offend someone?

Before I go, keep this in mind: Imagine, for just a second, what this country would be like if Safe Harbors were to be given to parents over the actions of their children.

Parents already have safe harbor protection for what their children do. They’re only responsible if they either encourage their children to do something illegal, or if they’re made aware of their child’s activities and do nothing to stop it.

By your logic the parents of the Columbine killers should stand trial for multiple murders because their sons were a pair of losers who went psycho. Is that the kind of justice system you want?

Twinrova says:

So many replies...

Since so many real world examples were used, let me give you one:
You’re in Walmart and are shopping when you accidentally bump a shelf, causing its contents to spill onto you, breaking both your legs.

Who is responsible here? You, Walmart or the shelving company?

The answer will only be found once INTENTION has been established, as with any real world issue.
Did Walmart INTENTIONALLY overload the shelf, causing it to fail?
Did the shelving company INTENTIONALLY ship it with an inaccurate weight limit?
Did you INTENTIONALLY run into the shelf?
Did Walmart INTENTIONALLY fail to inspect its shelving for issues?

Using more real world examples as given above:
Someone asked if the street dept should be held responsible for its street being used in a street fight. Of course not, because there was NO INTENTION on the street dept’s part.
HOWEVER: If the street dept asked those persons to fight on their street, then yes, they were INTENTIONAL in their request to use the street as a weapon.

Someone asked if the parents of the Columbine duo should be held responsible for their actions. Of course not, because there was NO INTENTION on the parents’ part.
HOWEVER: If the parents knew of their plans, then they were INTENTIONAL in the outcome of Columbine (even if they didn’t know the exact outcome).

Someone asked if it should be my fault if my fundraiser turned sour because someone was a jerk and hit someone. Of course not, because there was no INTENTION of inviting this person for the purpose of hitting another.
HOWEVER: If I knew this person would hit someone, then I was INTENTIONAL in allowing another to be hit and should be held responsible.

We see it every day: “finger pointing” takes precedence over responsibility simply because intention has not yet been discovered.

Another example: Video games are being blamed for the violence of the world. This shouldn’t surprise anyone, because we KNOW where the intention is; it’s just a failed excuse to blame another because the INTENTION of the person hasn’t been established.

It’s much easier for media to blame video games than it is to say “It’s not known why the person poisoned their grandmother.” By using video games as an excuse, you know idiots out there are going to tune in because of fear. “Will my child poison me? I’ll ban my child from playing the same game!”

Mike pointed out it is impossible to screen every piece of submitted content, and I do agree. It is impossible to screen everything, but certainly very possible to screen for a few things.

Example: Say Techdirt decides to add filters to prevent foul language. Words like “asshole” are now blocked. Does this mean ALL foul language is banned? Of course not, because there will be people out there who add things like “a$$hole”, which circumvents the filter.
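To make that concrete, here is a minimal sketch (hypothetical Python, not anything Techdirt actually runs) of such an exact-match filter and the substitution that defeats it:

```python
# Hypothetical sketch of a naive word-blocklist filter -- not any real
# site's code. Exact matches are caught; trivial substitutions are not.
BLOCKED_WORDS = {"asshole"}

def is_blocked(comment: str) -> bool:
    """Flag a comment if any word exactly matches the blocklist."""
    words = comment.lower().split()
    return any(word.strip(".,!?") in BLOCKED_WORDS for word in words)

print(is_blocked("You're an asshole"))  # True: exact match is caught
print(is_blocked("You're an a$$hole"))  # False: the "$" swap slips through
```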

This is where Safe Harbors come in, because the site DID take measures to stop foul language, with no INTENTION to allow it. Once those measures are circumvented, intention is no longer an issue and the responsibility shifts to the user.

This is why damn near every website has Terms of Service. It’s their “safety net”, if you will. But sites that do nothing to enforce their ToS as best they can should be held liable.

This, in a real world example, would be equivalent to a toxic dumping company whose ToS states “Your drinking water is safe to drink and we’re not responsible for anything found in the water” while continuing to dump its toxic waste into the ground water!

My reason for holding MySpace liable is that they took NO MEASURES to enforce their ToS by validating that users are who they say they are. If they had at least tried, then I could clearly state MySpace isn’t liable.

This is why the lady who created the page is in trouble: not for what she wrote, but for how she went about doing it. And again, it’s because MySpace took no action to verify this information that the computer fraud charge is warranted.

When most of you get upset that this law is being used in this manner, but then try to convince me MySpace shouldn’t be held liable, don’t get upset when you see people trying to put the responsibility where it belongs as best they can WHILE trying to maintain Safe Harbors.

At any rate, I agreed to keep an open mind, and I do believe Safe Harbors are necessary. But SH should be forfeited the moment INTENTION on the website’s part is clear.

I should have included the ToS issue in my original post, and I apologize for not doing so.

I know people will still disagree, and I’m okay with that.
I can see the need for SH. Believe me, I do. I just can’t fathom the notion it should be used as a shield 100% of the time when it’s blatantly obvious no attempt at enforcing the site’s purpose is visible.

Stacy Greenfield says:

Re: So many replies...

My reason for holding MySpace liable is that they took NO MEASURES to enforce their ToS by validating that users are who they say they are. If they had at least tried, then I could clearly state MySpace isn’t liable.

This is why the lady who created the page is in trouble: not for what she wrote, but for how she went about doing it. And again, it’s because MySpace took no action to verify this information that the computer fraud charge is warranted.

ok, assuming the internet were to do things your way (and I’ll admit to not agreeing with or liking your proposal) and all countries followed the exact same laws with regards to the internet (which will never happen), I’d like to know how you prove someone is who they say they are online?

an adult is easy, right? a credit card?

but what about a teenager? at 14 they are allowed to register on sites without parental permission and yet cannot get a state or federally issued ID. The regulations for different countries are different, and you can’t be 100% sure someone is from the country they say they are.

are you going to demand a phone number that you call? what about those who don’t have a phone?

can you even prove conclusively that my name is (or is not) Stacy Greenfield? there is no way to do so.


Example: Say Techdirt decides to add filters to prevent foul language. Words like “asshole” are now blocked. Does this mean ALL foul language is banned? Of course not, because there will be people out there who add things like “a$$hole”, which circumvents the filter.

that is silly; anti-profanity filters never work properly and only cause problems. for example, you may say to ban the word “ass” because you got called one, but then you can’t talk about the animal, the car, the society, the subtitle script language, the album, or the gene. most anti-profanity filters will also ruin the words assassinate, assistance, assembly and everything else starting with (or containing) the letters “ass”. all that trouble because a word has one (mostly harmless) negative meaning that someone doesn’t like. never mind that what is offensive in one country isn’t offensive at all in others.
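to see how badly substring matching over-blocks, here’s a minimal sketch (hypothetical Python again, not any real site’s filter) of the classic “Scunthorpe problem”:

```python
# Hypothetical sketch of a substring-based profanity filter. Blocking the
# string "ass" anywhere in the text flags plenty of innocent words too.
BLOCKED_SUBSTRINGS = ["ass"]

def is_blocked(text: str) -> bool:
    """Flag text if any blocked string appears anywhere within it."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKED_SUBSTRINGS)

for word in ["assassinate", "assistance", "assembly", "classic"]:
    print(word, is_blocked(word))  # every one is wrongly flagged True
```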

so I really would like to know how you would implement your ideas in a way that works, because what you suggest so far is completely impracticable to implement.

DanC says:

Re: So many replies...

My reason for holding MySpace liable is that they took NO MEASURES to enforce their ToS by validating that users are who they say they are.

Enforcement of the MySpace terms of service doesn’t require the validation of registration information during account creation.

But SH should be forfeited the moment INTENTION on the website’s part is clear.

By my reading, the intention of the MySpace ToS is to act on a violation when either notified or noticed. Since the ToS quite clearly places the onus of providing truthful registration information on the user, I fail to see how MySpace can be held liable.
