California State Senator Pushes Bill To Remove Anonymity From Anyone Who Is Influential Online

from the someone-buy-padilla-a-'constitutional-lawmaking-for-dummies'-book dept

What the fuck is wrong with state lawmakers?

It seems that across the country, they cannot help but introduce the absolute craziest, obviously unconstitutional bullshit, and then seem shocked when people suggest the bills are bad.

The latest comes from California state Senator Steve Padilla, who recently proposed a ridiculous bill, SB 1228, to end anonymity for “influential” accounts on social media. (I saw some people online confusing him with Alex Padilla, who is the US Senator from California, but they’re different people.)

This bill would require a large online platform, as defined, to seek to verify the name, telephone number, and email address of an influential user, as defined, by a means chosen by the large online platform and would require the platform to seek to verify the identity of a highly influential user, as defined, by asking to review the highly influential user’s government-issued identification.

This bill would require a large online platform to note on the profile page of an influential or highly influential user, in type at least as large and as visible as the user’s name, whether the user has been authenticated pursuant to those provisions, as prescribed, and would require the platform to attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated, as prescribed.

First off, this is unconstitutional. The First Amendment has been (rightly) read to protect anonymity in most cases, especially regarding election-related information. That’s the whole point of McIntyre v. Ohio. It’s difficult to know what Padilla is thinking, especially given his blatant admission that this bill seeks to target speech regarding elections. There are exceptions to the right to be anonymous, but they are limited to pretty specific scenarios. Cases like Dendrite lay out a pretty strict test for de-anonymizing a person (limited as precedent, but adopted by other courts), and de-anonymization can happen only after a plaintiff demonstrates to a court that the underlying speech is actionable under the law. And not, as in this bill, because the speech is “influential.”

Padilla’s bill recognizes none of that, and almost gleefully makes it clear that he is either ignorant of the legal precedents here, or he doesn’t care. As he lays out in his own press release about the bill, he wants platforms to “authenticate” users because he’s worried about misinformation online about elections (again, exactly the kind of election-related speech the McIntyre case says you can’t target this way).

“Foreign adversaries hope to harness new and powerful technology to misinform and divide America this election cycle,” said Senator Steve Padilla. “Bad actors and foreign bots now have the ability to create fake videos and images and spread lies to millions at the touch of a button. We need to ensure our content platforms protect against the kind of malicious interference that we know is possible. Verifying the identities of accounts with large followings allows us to weed out those that seek to corrupt our information stream.”

That’s an understandable concern, but an unconstitutional remedy. Anonymous speech, especially political speech, is a hallmark of American freedom. Hell, the very Constitution that this law violates was adopted, in part, due to “influential” anonymous pamphlets.

The bill is weird in other ways as well. It seems to be trying to attack both anonymous influential users and AI-generated content in the same bill, and does so sloppily. It defines an “influential user” as someone for whom:

“Content authored, created, or shared by the user has been seen by more than 25,000 users over the lifetime of the accounts that they control or administer on the platform.”

This is odd on multiple levels. First, “over the lifetime of the accounts” would mean that a ridiculously large number of accounts will, at some point in the future, reach that threshold. Basically, you make ONE SINGLE viral post, and the social media site has to get your data and you can no longer be anonymous. Second, does Senator Padilla really think it’s wise to require social media sites to track “lifetime” views of content? Because that could be a bit of a privacy nightmare.

And then it adds in a weird AI component. This also counts as an “influential user”:

Accounts controlled or administered by the user have posted or sent more than 1,000 pieces of content, whether text, images, audio, or video, that are found to be 90 percent or more likely to contain content generated by artificial intelligence, as assessed by the platform using state-of-the-art tools and techniques for detecting AI-generated content.

So, first, posting 1,000 pieces of AI-generated content hardly makes an account “influential.” There are plenty of AI-posting bots that have little to no following. Why should they have to be “verified” by platforms? Second, I have a real problem with the whole “if ‘state-of-the-art tools’ identify your content as mostly AI, then you lose your right to anonymity” framing, when there’s zero explanation of why, or of whether these “state-of-the-art tools” are even reliable (hint: they’re not!). Has Padilla run an analysis of these tools?

There are higher thresholds that designate someone as “highly influential”: 100,000 lifetime user views and 5,000 potentially AI-created pieces of content. Under these terms, I would be legally designated “highly influential” on a few platforms (my parents will be so proud). But then, “large online platforms” would be required to “verify” the “influential users’” identity, including the user’s name, phone number, and email, and would be required to “seek” government-issued IDs from “highly influential” users.
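
To show just how mechanical these designations are, here’s a rough sketch of the logic the bill seems to contemplate. To be clear, this is my own illustration: every name, field, and constant below comes from me, not from the bill, which describes the thresholds only in prose.

# Rough sketch of SB 1228's designation logic as I read it (illustrative only).
from dataclasses import dataclass

INFLUENTIAL_VIEWS = 25_000            # "more than 25,000 users" over account lifetime
HIGHLY_INFLUENTIAL_VIEWS = 100_000
INFLUENTIAL_AI_POSTS = 1_000          # posts the platform's detector scores as likely AI
HIGHLY_INFLUENTIAL_AI_POSTS = 5_000

@dataclass
class UserStats:
    lifetime_views: int      # never resets: the platform has to track this forever
    likely_ai_posts: int     # per whatever "state-of-the-art" detector the platform uses

def designation(stats: UserStats) -> str:
    """Return the label the bill would hang on a user, given lifetime stats."""
    if (stats.lifetime_views > HIGHLY_INFLUENTIAL_VIEWS
            or stats.likely_ai_posts > HIGHLY_INFLUENTIAL_AI_POSTS):
        return "highly influential"    # platform must ask for a government-issued ID
    if (stats.lifetime_views > INFLUENTIAL_VIEWS
            or stats.likely_ai_posts > INFLUENTIAL_AI_POSTS):
        return "influential"           # platform must seek name, phone number, email
    return "ordinary"

# One viral post is enough to cross the line for good:
print(designation(UserStats(lifetime_views=30_000, likely_ai_posts=0)))  # influential

Note that both counters only ever ratchet upward. To apply these definitions at all, a platform would have to keep lifetime tallies like these on every user indefinitely, which is exactly the privacy nightmare described above.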

There is no fucking way I’m giving ExTwitter my government ID, but under the bill, Elon Musk would be required to ask me for it. No offense, Senator Padilla, but I’m taking the state of California to court for violating my rights long before I ever hand my driver’s license over to Elon Musk at your demand.

While the bill only says that the platforms “shall seek” this info, it would then require them to add a tag “at least as large and as visible as the user’s name” to their profile designating them “authenticated” or “unauthenticated.”

It would then further require that any site allow users to block all content from “unauthenticated influential or highly influential” users.

It even gets down to the level of product management, in that it tells “large online platforms” how they have to handle showing content from “unauthenticated” influential users:

(1) A large online platform shall attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated.

(2) For a post from an unauthenticated influential or highly influential user, the notation required by paragraph (1) shall be visible for at least two seconds before the rest of the post is visible and then shall remain visible with the post.

Again, there is so much that is problematic about this bill. Anyone who knows anything about anonymity would know this is so far beyond what the Constitution allows that it should be an embarrassment for Senator Padilla, who should pull this bill.

And, on top of everything else, this would become a massive target for anyone who wants to identify anonymous users. Companies are going to get hit with a ton of subpoenas and other legal demands for information on people, information they will only have collected because someone had a post go viral.

Senator Padilla should be required to read Jeff Kosseff’s excellent book, “The United States of Anonymous,” as penance, and to publish a book report that details the many ways in which his bill is an unconstitutional attack on free speech and anonymity.

Yes, it’s reasonable to be concerned about manipulation and a flood of AI content. But we don’t throw out basic constitutional principles based on such concerns. Tragically, Senator Padilla failed this basic test of constitutional civics.

Comments on “California State Senator Pushes Bill To Remove Anonymity From Anyone Who Is Influential Online”

Anonymous Coward says:

I am not an expert by any means but this sounds very much like compelled speech.

(1) A large online platform shall attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated.

(2) For a post from an unauthenticated influential or highly influential user, the notation required by paragraph (1) shall be visible for at least two seconds before the rest of the post is visible and then shall remain visible with the post.

Cat_Daddy (profile) says:

Mr. Padilla, I have a question…

Does… does he not know that some of those users with 25,000 engagements (whatever that means) are protesters, activists, journalists, members of minority groups, politicians, LGBTQ people, and members of many, many other communities? Does he even know that doxxing is a thing? Doesn’t he know that that is extremely dangerous? Mr. Padilla is spearheading a bill that would coerce online companies into essentially spying on these individuals without their consent. That is messed up.

Michael Barclay (profile) says:

Let's not dox Jack Smith or Taylor Swift (Computer Security's version)

It would be a shame to dox two of my favorite eX-Twitter accounts. Jack E. Smith (@7Veritas4) is a political parody of the real Jack Smith, the prosecutor, and provides excellent news and commentary about a certain former President’s legal problems. The anonymity of this account permits political messaging that wouldn’t be as effective from an identified person.

Swift on Security (@SwiftonSecurity) started by pretending to be Taylor Swift’s day job (computer helpdesk and security) in case her musical gig somehow didn’t work out. While eX-Twitter’s subsequently imposed impersonation rules ended the “day job” pretense, this account remains a valuable resource for computer security news (especially zero days) and occasional Taylor Swift humor.

ke9tv (profile) says:

State legislators keep floating bills like this, in hopes that someday one of them will reach a Supreme Court with six Republican appointees, five of whom support policies resembling those of the Fascisti of the 1930’s, and that some decision will issue that reinterprets away the First Amendment. I suspect that the decision will hinge on the fact that the ‘last mile’ of the Internet exploits the public right-of-way, and therefore the government might purportedly have a right to regulate the content that travels over it, much as it has a right to regulate the content that travels over the public airwaves. Because the corporations that own the Court really, really want to return to the model where freedom of the press (which, of course, belongs to the one who owns the printing press) is prohibitively expensive for the peasants, and we’ll return to listening to the speech of our betters.

Thad (profile) says:

Re:

State legislators keep floating bills like this, in hopes that someday one of them will reach a Supreme Court with six Republican appointees, five of whom support policies resembling those of the Fascisti of the 1930’s, and that some decision will issue that reinterprets away the First Amendment.

I…somehow don’t think Steve Padilla (D-CA) sees it in precisely those terms.

Anonymous Coward says:

There are even more problems with this

1. Since it defines thresholds for views, it’d be trivially easy to program a cloud-based clickfarm to push the view totals as high as necessary to trigger the provisions of this bill. Which means it’d be rather cheap to push pretty much EVERYONE over the threshold.

1A. What stops social media companies from deliberately inflating views so that they’re “forced” to deanonymize certain users? (Keep in mind that we have sociopathic thugs like Zuckerberg, Musk, and Dorsey involved here. Expecting integrity from them or anything they touch is utterly foolish.)

2. Data brokers will have this information minutes after the social media companies have it.

3. The casual presumption that reliably detecting AI content is a solved problem in computing is laughably wrong.

4. Botnet operators (like the IRA) will have no difficulty evading this because they can create/use/delete tens of millions of accounts, cycling through them continuously.

That One Guy (profile) says:

Re: What's good for the goose...

That was my thought as well. If regular users with enough followers deserve to be stripped of anonymity because they might pose a problem by misleading people and giving them bogus information, then clearly politicians, who have the ability to create and put into practice laws that impact entire states or countries, deserve anonymity even less.

Be a politician or want to be one? Congrats, you are barred from any anonymous posting or account creation online, it’s with your real name or not at all.

Bloof (profile) says:

This is a deeply, deeply stupid idea and amounts to state-mandated doxxing. Once the platforms have the information, you will get lawsuit after lawsuit to unmask users, and platforms like Twitter will just hand over all their information on dissidents to Saudi Arabia or the Modi regime in India, as they’ve demonstrated they’re happy to just roll over to those governments’ demands.

NoahVail (profile) says:

Do Legislatures have offices that can vet bills for constitutional and other issues?

I don’t know the answer to this. Do state legislatures maintain an office to review bills to ensure they don’t run afoul of constitutional and other legal issues?

Here I mean a staff of qualified personnel and an office that falls under the operations of the legislature and does not answer to any particular legislator.

That Anonymous Coward (profile) says:

While it’s been done before, perhaps it is time to start showing citizens what these bills will cause.

Pretty sure people confronted with demands to turn over their IDs, or else not be treated like real users, might express how they feel to the state lawmakers.

The truly sad thing is that in today’s atmosphere you’ll have people all for it, who will refuse to budge until they find out it applies to them too.

Stephen T. Stone (profile) says:

Re:

Swatting is a crime, you know. And I’d bet that a swatting that occurs when the swatter and their victim live in different states is a federal offense. So if you want to be on the hook for an act that will absolutely put you in federal prison for several years, you go right ahead with your plan to call SWAT teams on government officials in states where you don’t live for no reason other than disagreement with their politics and policies.

That One Guy (profile) says:

Politicians aren't cops, basic knowledge of the job should be the baseline

Senator Padilla should be required to read Jeff Kosseff’s excellent book, “The United States of Anonymous,” as penance, and to publish a book report that details the many ways in which his bill is an unconstitutional attack on free speech and anonymity.

Stop. Giving. Politicians. The benefit of the doubt.

If a doctor suggested that a patient needed to balance their four humors because they were out of alignment you wouldn’t shake your finger and suggest that they just needed to read a book to understand why that’s a bad idea.

If a mechanic suggested that the reason a car isn’t working is because there are literal gremlins inside the engine messing things up you wouldn’t shrug your shoulders and dismiss it as them just having a bad day.

And when a politician suggests blatantly unconstitutional laws, you do not tsk-tsk disapprovingly and act as though they had no way of knowing that what they were suggesting was a terrible idea.

Are there stupid politicians? Yes, absolutely.

Should any of them be given or allowed that as an excuse for their actions? Absolutely not.

Having basic knowledge of the profession you’ve chosen to enter is expected in every other job out there; politicians not only don’t deserve a pass, they deserve to be held to higher standards given the amount of power they wield and how many people their actions can impact.

Anonymous Coward says:

It even gets down to the level of product management, in that it tells “large online platforms” how they have to handle showing content from “unauthenticated” influential users:

Mr. Padilla, Mississippi called, and wants to complain about you using its governmental censorialness shtick.

There’s good news, though. Florida called to welcome you to the club.

bhull242 (profile) says:

One flaw here that I haven’t seen mentioned is that, sometimes, a (possibly modified) copy of another post, one that doesn’t indicate it isn’t the original, goes viral, but, for whatever reason, the original doesn’t (few views and few shares). In this situation, the ones who posted and/or shared the copy will be “influential”, but the one who posted the original would not be, so, best-case scenario, only the ones down the line would have their identities authenticated.

As such, this doesn’t always address the source of the misinformation Padilla is so concerned with. This is a problem since the issue is often false information that was created by foreign agents and then spread by American rubes (I think that’s the term). In the given scenario, only the Americans would be verified, not the foreign agent, and the posts targeted wouldn’t even indicate that there was an original post in the first place, so we still would have no more reason than before to suspect foreign influence is involved; if anything, we’d have less reason since the apparent (but not truly) “original” would have been shown to not be from foreign agents.

There’s also the fact that all this really does is let people know if someone has been authenticated or not. Someone could be authenticated but have dual citizenship or something and be working on behalf of a foreign entity, and so this doesn’t necessarily help that much. Plus, it does nothing to even attempt to verify whether the originator was even in a position to know whatever it is they’re talking about.

I’m not even sure that AI being used to spread election misinformation, specifically, is even all that common, so the focus on AI doesn’t really make much sense, either.

Really, I have no idea how this is even supposed to be helpful in addressing the issue in the first place.

bhull242 (profile) says:

Speaking of detecting AI-generated content, it seems likely to me that doing so with any useful accuracy is fundamentally impossible.

See, the point of AI-generation in general is training computers to be able to mimic the way humans talk, write, interact with people, and/or produce images or videos. It’s our attempt to get computers to behave like humans. While it’s not perfect, as far as the program generating the content is concerned, it looks no different from human-generated content; otherwise, it wouldn’t produce it as output in the first place.

In order to detect the flaws, we would have to train an AI to be able to see the differences that humans can see between then-state-of-the-art AI-generated content and human-generated content. However, doing that would require that AI to be better at recognizing human-generated content than the AIs generating the content are. It would be fairly trivial to incorporate such a detection tool into a generative AI to improve its output, which would then make its content undetectable by the aforementioned tool.

Let’s say, hypothetically, we were able to create a program that flawlessly detects whether or not any given piece of content was generated by any AI that was in use at that time this new program was released. That same data could then be used by the generating programs to produce more-human-like content than the previous generations of AI, and so they would be able to generate content that is undetectable by that detecting program. The detecting program would have to be upgraded to be able to detect the new generator programs’ outputs as such, and then that data could be used to allow the generating programs to produce even-more-human-like content, and the loop continues so long as there is any difference between human- and AI-generated content that could ever be detected accurately and reliably by computers.

Basically, the more accurate our tools become at detecting AI generation, the better generative programs will become at producing content that those tools cannot reliably detect. (The only exception would be for generative AIs that intentionally produce a hidden watermark, as a sort of author’s signature, in order to flag their own work, or which produce other deliberately or incompetently introduced “errors”, and I am not convinced that it will ever be the case that every generative AI will do this, which would be required for the detection tools to have reasonable accuracy.)

Heck, the tools may eventually become better than humans at detecting AI generation, but even that wouldn’t be good enough.

And this is assuming the best-case scenario for the detection software. It may be the case that we will never be able to get computers to recognize the differences between AI- and human-generated content even half as well as humans can. Either way, detection algorithms will never be ahead of generative AIs for long (if ever), so they can never be reliable for long (if ever), either.
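
To make the feedback loop in the comment above concrete, here is a toy sketch of that dynamic. Nothing in it is a real detector or generator; the classes and numbers are invented purely to illustrate how the detector’s own successes become training signal for the generator.

# Toy illustration of the detector/generator arms race described above.
import random

class Detector:
    """Stand-in for an AI-content detector with some fixed level of skill."""
    def __init__(self, skill: float):
        self.skill = skill
    def flags_as_ai(self, generator_quality: float) -> bool:
        # The more human-like the generator's output, the less often it gets caught.
        return random.random() < max(0.0, self.skill - generator_quality)

class Generator:
    """Stand-in for a generative model that learns from being caught."""
    def __init__(self):
        self.quality = 0.1  # how human-like its output is
    def train_against(self, detector: Detector, samples: int = 1000) -> None:
        for _ in range(samples):
            if detector.flags_as_ai(self.quality):
                self.quality += 0.001  # every detection is free training signal

detector = Detector(skill=0.9)
generator = Generator()
for round_num in range(5):
    generator.train_against(detector)
    detector.skill = min(1.0, detector.skill + 0.02)  # the detector improves too...
    print(f"round {round_num}: generator quality {generator.quality:.2f}, "
          f"detector edge {max(0.0, detector.skill - generator.quality):.2f}")
# ...but the generator closes most of the gap each round, so the detector's
# effective edge keeps shrinking: the arms race described above.

In this toy version the detector never keeps its lead for long, because every post it correctly flags is also information the generator can use to get better.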
