Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Bluesky at bsky.app/profile/mmasnick.bsky.social, on Mastodon at mastodon.social/@mmasnick, and still a little bit (but less and less) on Twitter at www.twitter.com/mmasnick.

Posted on Techdirt - 24 April 2024 @ 03:39pm

Universal Music’s Copyright Claim: 99 Problems And Fair Use Ain’t One

Welp, sometimes you gotta read Techdirt fast, or you just might miss something. And sometimes it’s because Universal Music is acting to silence creativity yet again. Yesterday, we posted about how Dustin Ballard, the creative genius behind There I Ruined It, who makes very funny parody songs, had posted a lengthy disclaimer on his latest YouTube upload.

The video was the Beach Boys covering Jay-Z’s “99 Problems,” where every bit of it (minus the lyrics) sounded like a classic Beach Boys song. What made it interesting to us at Techdirt was the long and convoluted disclaimer included in the video to explain that the song is parody fair use, which is supposed to be allowed under copyright law.


But, sometime after that story got posted, Universal Music stepped in and decided to ruin the fun, in the only way Universal Music knows how to act: by being a copyright bully where it has no need or right to be.


Now, this is likely an automated copyright claim using ContentID or something similar, rather than a full DMCA takedown. But, it’s bullshit either way. Universal Music knows full well that it’s supposed to take fair use into account before issuing a copyright claim. Remember, Universal Music lost a lawsuit over its bogus copyright claims where it was told that it had to take fair use into account before sending such claims.

But, alas, none of that matters the way the system works today. It’s more important for YouTube to keep Universal Music happy than to protect the content creators on YouTube or the people who want to enjoy this music.

And, thus, as was discussed in the podcast we just uploaded, copyright remains a powerful tool of censorship.

I’m almost hesitant to point this out, for fear that some asshole at Universal Music will read this and continue on their warpath of culture destruction, but you can still hear versions of the Beach Boys doing “99 Problems” on both Instagram and TikTok (at least until TikTok is banned). The versions on those two sites are a bit shorter than the full YouTube version, and they trim the copyright disclaimer down as well.

But, really, this is yet another example of how totally broken the copyright system is. There is no conceivable reason for removing this. It’s not taking anything. It’s not making the Beach Boys or Jay-Z lose any money (and, ditto for Universal Music). If anything, it’s making people more interested in the underlying songs and artists (no one is interested in fucking Universal Music, though).

Fair use is supposed to be the valve by which the copyright system doesn’t violate the First Amendment. But when we see copyright wielded as a censorial weapon like this, with no real recourse for the artist, it should raise serious questions about why we allow copyright to act this way in the first place.

Posted on Techdirt - 24 April 2024 @ 12:05pm

Biden Signs TikTok Ban Bill; Expect A Lawsuit By The Time You Finish Reading This Article

Get your dance moves on now, as TikTok may be going away. Okay, it’s not going away that quickly and quite possibly isn’t going away at all, but President Biden signed the bill this morning that requires ByteDance to divest itself from TikTok, or have the app banned from the Apple and Google app stores.

The law gives ByteDance 12 months to divest, but in all likelihood sometime today or tomorrow, TikTok will file a well-prepared lawsuit with high-priced lawyers challenging the law on a variety of different grounds, including the First Amendment.

As you’ve probably heard, the bill was tacked onto a foreign aid funding bill, and there was no way the President wasn’t going to sign that bill. But as ridiculous as it is to attach a TikTok ban to foreign spending support, Biden had made it clear he supported the TikTok ban anyway. Still, it does seem notable that, when signing the bill, Biden didn’t even mention the TikTok ban in his remarks.

We’ve discussed this a few times before, but the move to ban TikTok is particularly stupid. It demonstrates American hypocrisy regarding its advocacy for an open internet. It goes against basic First Amendment principles. It overreacts to a basic moral panic. And it does fuck all to stop the actual threats that people justifying the ban talk about (surveillance and manipulation/propaganda).

It’s particularly stupid to do this now, just as Congress was finally willing to explore a comprehensive privacy bill.

The NY Times has a big article about the “behind the scenes negotiations” that resulted in this bill that (bizarrely) makes it sound like the TikTok bill is an example of Congress working well:

For nearly a year, lawmakers and some of their aides worked to write a version of the bill, concealing their efforts to avoid setting off TikTok’s lobbying might. To bulletproof the bill from expected legal challenges and persuade uncertain lawmakers, the group worked with the Justice Department and White House.

And the last stage — a race to the president’s desk that led some aides to nickname the bill the “Thunder Run” — played out in seven weeks from when it was publicly introduced, remarkably fast for Washington.

This leaves out some fairly important elements, including powerful lobbying by companies like Meta (who were clearly threatened by TikTok) to spread a moral panic about the app. It also leaves out the massive financial conflicts of many of the lawmakers who pushed for this bill.

Either way, the bill is going to get challenged and quickly. Previous attempts to ban TikTok (one by former President Trump and one by Montana) were both rejected as violations of the First Amendment.

While this bill is written more carefully to try to avoid that fate, it’s all a smokescreen, as the underlying concerns still very much implicate the First Amendment. The only real question is whether or not the outrage and moral panic about “CHINA CONTROLS THIS APP!!!!” will lead judges to make exceptions in this case.

The bill still has fundamental free speech problems. First of all, banning users from accessing content raises serious First Amendment questions. Second, requiring an app store to stop offering an app raises different First Amendment questions. Yes, there are cases when the US can force divestiture, but the remedies in this bill raise serious concerns and would create a very problematic precedent allowing future Presidents to effectively ban apps they dislike or possibly force their sale to “friends.” And that’s not even getting into what it does in terms of justifying censorship and app banning elsewhere.

Posted on Techdirt - 24 April 2024 @ 09:31am

FTC Bans Non-Competes, Sparks Instant Lawsuit: The War For Worker Freedom

This is a frustrating article to write. The FTC has come out with a very good and important policy ruling, but I’m not sure it has the authority to do so. The legal challenge (filed basically seconds after the rule came out) could do serious damage not just to some fundamental parts of the administrative state, but to the very policy the FTC is trying to enact: protecting the rights of workers to switch jobs, rather than being effectively tied to an employer in modern-day indentured servitude with no realistic ability to leave.

All the way back in 2007, I wrote about how non-competes were the DRM of human capital. They were an artificial manner of restricting a basic freedom, and one that served no real purpose other than to make everything worse. As I discussed in that post, multiple studies done over the previous couple of decades had more or less shown that non-competes are a tremendous drag on innovation, to the point that some argue (strongly, with data) that Silicon Valley would not be Silicon Valley if not for the fact that California has deemed non-competes unenforceable.

The evidence of non-competes being harmful to the market, to consumers, and to innovation is overwhelming. It’s not difficult to understand why. Studies have shown that locking up information tends to be harmful to innovation. The big, important, innovative breakthroughs happen when information flows freely throughout an industry, allowing different perspectives to be brought into the process. Over and over again, it’s been shown that those big breakthroughs come when information is shared and multiple companies are trying to tackle the underlying problem.

But you don’t want companies colluding to share that information directly. Instead, it’s much better to simply ban non-competes, allowing workers to switch jobs. That enables more of a free flow of information between companies, which contributes to important innovations, rather than stagnant groupthink. Non-competes act as a barrier to that free flow of information, which holds back innovation.

They’re really bad. It’s why I’ve long supported states following California’s lead in making them unenforceable.

And, of course, once more companies realized the DRM-ish nature of non-competes, they started using them for more and more evil purposes. This included, somewhat infamously, fast food workers being forced to sign non-competes. Whatever (weak) justification there might be for higher-end knowledge workers to sign non-competes, the idea of using them for low-end jobs is pure nonsense.

Non-competes should be banned.

But, when the FTC proposed banning non-competes last year, I saw it as a mixed bag. I 100% support the policy goal. Non-competes are actively harmful and should not be allowed. But (1) I’m not convinced the FTC actually has the authority to ban them across the board. That should be Congress’ job. And, (2) with the courts the way they are today, there’s a very high likelihood that any case challenging such an FTC rule would not just get tossed, but that the FTC may have its existing authority trimmed back even further.

Yesterday, the FTC issued its final rule on non-competes. The rule bans all new non-competes and voids most existing non-competes, with the one exception being existing non-competes for senior executives (those making over $151,164 and who are in “policy-making positions”).

The rule is 570 pages long, with much of it trying to make the argument for why the FTC actually has this authority. And all those arguments are going to be put to the test. Very shortly after the new rule dropped (long before anyone could have possibly read the 570 pages), a Texas-based tax services company, Ryan LLC, filed a lawsuit.

The timing, the location, and the lawyers all suggest this was carefully planned. The case was filed in the Northern District of Texas. It was not, as many people assumed, assigned to judge-shopping, nationwide-injunction favorite Matthew Kacsmaryk. Instead, it went to Judge Ada Brown. The law firm filing the case is Gibson Dunn, one of the firms you choose when you’re planning to go to the Supreme Court. One of the lawyers is Gene Scalia, son of the late Supreme Court Justice Antonin Scalia.

Also notable, as pointed out by a lawyer on Bluesky, is that the General Counsel of Ryan LLC clerked for Samuel Alito (before Alito went to the Supreme Court) and is married to someone who clerked for both Justices Alito and Thomas. She also testified before the Senate in support of Justice Gorsuch during his nomination.

The actual lawsuit doesn’t just seek to block the rule. It is basically looking to destroy what limited authority the FTC has. The main crux of the argument is on firmer legal footing, claiming that this rule goes beyond the FTC’s rulemaking authority:

The Non-Compete Rule far exceeds the Commission’s authority under the FTC Act. The Commission’s claimed statutory authority—a provision allowing it “[f]rom time to time” to “classify corporations and . . . make rules and regulations,” 15 U.S.C. § 46(g)—authorizes only procedural rules, as the Commission itself recognized for decades. This is confirmed by, among other statutory features, Congress’s decision to adopt special procedures for the substantive rulemaking authority it did grant the Commission, for rules on “unfair or deceptive acts or practices.”

I wish this weren’t the case, because I do think non-competes should be banned, but this argument may be correct. Congress should make this decision, not the FTC.

However, the rest of the complaint is pretty far out there. It’s making a “major questions doctrine” argument here, which has become a recent favorite among the folks looking to tear down the administrative state. It’s not worth going deep on this, other than to say that this doctrine suggests that if an agency is claiming authority over “major questions,” it has to show that it has clear (and clearly articulated) authority to do so from Congress.

Is stopping the local Subway from banning sandwich makers from working at the McDonald’s down the street a “major question”? Well, the lawsuit insists that it is.

Moreover, even if Congress did grant the Commission authority to promulgate some substantive unfair-competition rules, it did not invest the Commission with authority to decide the major question of whether non-compete agreements are categorically unfair and anticompetitive, a matter affecting tens of millions of workers, millions of employers, and billions of dollars in economic productivity.

And then the complaint takes its big swing: the whole FTC is unconstitutionally structured.

Compounding the constitutional problems, the Commission itself is unconstitutionally structured because it is insulated from presidential oversight. The Constitution vests the Executive Power in the President, not the Commission or its Commissioners. Yet the FTC Act insulates the Commissioners from presidential control by restricting the President’s ability to remove them, shielding their actions from appropriate political accountability.

This is taking a direct shot at multiple parts of the administrative state, where Congress (for very good reasons!!!) set up some agencies to be independent agencies. They were set up to be independent to distance them from political pressure (and culture war nonsense). While the President can nominate commissioners or directors, they have limited power over how those independent agencies operate.

This lawsuit is basically attempting to say that all independent agencies are unconstitutional. This is one hell of a claim, and would do some pretty serious damage to the ability of the US government to function. Things that weren’t that political before would become political, and it would be a pretty big mess.

But that’s what Ryan LLC (or, really, the lawyers planning this all out) are gunning for.

The announcement that Ryan LLC put out is also… just ridiculous.

“For more than three decades, Ryan has served as a champion for empowering business leaders to reinvest the tax savings our firm has recovered to transform their businesses,” the firm’s Chairman and CEO, G. Brint Ryan, said in a statement. “Just as Ryan ensures companies pay only the tax they owe, we stand firm in our commitment to serve the rightful interest of every company to retain its proprietary formulas for success taught in good faith to its own employees.”

Um. That makes no sense. The FTC ruling does not outlaw NDAs or trade secret laws. Those are what protect “proprietary formulas.” So, the concern that Mr. Ryan is talking about here is wholly unrelated to the rule.

Last spring, Ryan “sought to dissuade” the FTC from imposing the new rule by submitting a 54-page public comment against it. In the comment, Ryan called non-compete agreements “an important tool for firms to protect their IP and foster innovation,” saying that without them, firms could hire away a competitor’s employees just to gain insights into their competitor’s intellectual property. Ryan added that the rule would inhibit firms from investing in that IP in the first place, “resulting in a less innovative economy.”

Again, almost everything said here is bullshit. They can still use NDAs (and IP laws) to protect their “IP.” That’s got nothing to do with non-competes.

As for the claim that it will result in “a less innovative economy,” I’ll just point to the fact that California remains the most innovative economy in the US and hasn’t allowed non-competes. Every single study on non-competes has shown that they hinder innovation. So Ryan LLC and its CEO are full of shit, but that shouldn’t be much of a surprise.

Anyway, this whole thing is a stupid mess. Non-competes should be banned because they’re awful and bad for innovation and employee freedom. But it should be Congress banning them, not the FTC. And now that the FTC has moved forward with this rule, it’s facing an obviously planned-out lawsuit, filed in the Northern District of Texas with friendly judges and the 5th Circuit appeals court ready to bless any nonsense you can think of.

And, of course, it’s happening at a time when the Supreme Court majority has made it pretty clear that dismantling the entire administrative state is something it looks forward to doing. This means there’s a pretty clear path in the courts for the FTC to lose here, and lose big time. One hopes that if the courts are leaning in this direction, they would simply strike down this rule, rather than effectively striking down the FTC itself. But these days, who the fuck knows how these cases will go.

And even just on the issue of non-competes, my fear is that this effort sets back the entire momentum behind banning them. Assuming the courts strike down the FTC rule, many will see it as open season to expand non-competes, and the FTC would likely be stripped of any power to challenge even the most egregious, anti-competitive ones.

Non-competes should be banned. But the end result of this rule could be that they end up being used more widely. And that would really suck.

Posted on Techdirt - 23 April 2024 @ 01:38pm

When You Need To Post A Lengthy Legal Disclaimer With Your Parody Song, You Know Copyright Is Broken

In a world where copyright law has run amok, even creating a silly parody song now requires a massive legal disclaimer to avoid getting sued. That’s the absurd reality we live in, as highlighted by the brilliant musical parody project “There I Ruined It.”

Musician Dustin Ballard creates hilarious videos, some of which reimagine popular songs in the style of wildly different artists, like Simon & Garfunkel singing “Baby Got Back” or the Beach Boys covering Jay-Z’s “99 Problems.” He appears to create the music himself, including singing the vocals, but uses an AI tool to adjust the vocal styles to match the artist he’s trying to parody. The results are comedic gold. However, Ballard felt the need to plaster his latest video with paragraphs of dense legalese just to avoid frivolous copyright strikes.

When our intellectual property system is so broken that it stifles obvious works of parody and creative expression, something has gone very wrong. Comedy and commentary are core parts of free speech, but overzealous copyright law is allowing corporations to censor first and ask questions later. And that’s no laughing matter.

If you haven’t yet watched the video above (and I promise you, it is totally worth it to watch), the last 15 seconds involve this long scrolling copyright disclaimer. It is apparently targeted at the likely mythical YouTube employee who might read it in assessing whether or not the song is protected speech under fair use.


And here’s a transcript:

The preceding was a work of parody which comments on the perceived misogynistic lyrical similarities between artists of two different eras: the Beach Boys and Jay-Z (Shawn Corey Carter). In the United States, parody is protected by the First Amendment under the Fair Use exception, which is governed by the factors enumerated in section 107 of the Copyright Act. This doctrine provides an affirmative defense for unauthorized uses that would otherwise amount to copyright infringement. Parody aside, copyrights generally expire 95 years after publication, so if you are reading this in the 22nd century, please disregard.

Anyhoo, in the unlikely event that an actual YouTube employee sees this, I’d be happy to sit down over coffee and talk about parody law. In Campbell v. Acuff-Rose Music, Inc., for example, the U.S. Supreme Court allowed 2 Live Crew to borrow from Roy Orbison’s “Pretty Woman” on grounds of parody. I would have loved to be a fly on the wall when the justices reviewed those filthy lyrics! All this to say, please spare me the trouble of attempting to dispute yet another frivolous copyright claim from my old pals at Universal Music Group, who continue to collect the majority of this channel’s revenue. You’re ruining parody for everyone.

In 2024, you shouldn’t need to have a law degree to post a humorous parody song.

But, that is the way of the world today. The combination of the DMCA’s “take this down or else” and YouTube’s willingness to cater to big entertainment companies with the way ContentID works allows bogus copyright claims to have a real impact in all sorts of awful ways.

We’ve said it before: copyright remains the one tool that allows for the censorship of content, but it’s supposed to be applied only to situations of actual infringement. Because Congress and the courts have decided that copyright sits in some sort of weird First Amendment-free zone, it allows for the removal of content before there is any adjudication of whether or not the content is actually infringing.

And that has been a real loss to culture. There’s a reason we have fair use. There’s a reason we allow people to create parodies. It’s because it adds to and improves our cultural heritage. The video above (assuming it’s still available) is an astoundingly wonderful cultural artifact. But it’s one that is greatly at risk due to abusive copyright claims.

Let’s also take this one step further. Tennessee just recently passed a new law, the ELVIS Act (Ensuring Likeness Voice and Image Security Act). This law expands the already problematic space of publicity rights based on a nonsense moral panic about AI and deepfakes. Because there’s an irrational (and mostly silly) fear of people taking the voice and likeness of musicians, this law broadly outlaws that.

While the ELVIS Act has an exemption for works deemed to be “fair use,” as with the rest of the discussion above, copyright law today seems to (incorrectly, in my opinion) take a “guilty until proven innocent” approach to copyright and fair use. That is, everything is set up to assume it’s infringing unless you can convince a court that it’s fair use, and that leads to all sorts of censorship.

So even if I think the video above is obviously fair use, if the Beach Boys decided to try to make use of the ELVIS Act to go after “There I Ruined It,” would it actually even be worth it for them to defend the case? Most likely not.

And thus, another important avenue and marker of culture gets shut down. All in the name of what? Some weird, overly censorial belief in “control” over cultural works that are supposed to be spread far and wide, because that’s how culture becomes culture.

I hope that Ballard is able to continue making these lovely parodies and that they are able to be shared freely and widely. But just the fact that he felt it necessary to add that long disclaimer at the end really highlights just how stupid copyright has become and how much it is limiting and distorting culture.

You shouldn’t need a legal disclaimer just to create culture.

Posted on Techdirt - 23 April 2024 @ 09:35am

Any Privacy Law Is Going To Require Some Compromise: Is APRA The Right Set Of Tradeoffs?

Privacy issues have been at the root of so many concerns about the internet, yet so many attempts to regulate privacy have been a total mess. There’s now a more thoughtful attempt to regulate privacy in the US that is (perhaps surprisingly!) not terrible.

For a while now, we’ve talked about how many of the claims from politicians and the media about the supposed (and often exaggerated, but not wholly fictitious) concerns about the internet are really the kinds of concerns that could be dealt with by a comprehensive privacy bill that actually did the right things.

Concerns about TikTok, questionably targeted advertising, the sketchy selling of your driving records, and more… are really all issues related to data privacy. It’s something we’ve talked about for a while, but most efforts have been a mess, even as the issue has become more and more important.

Part of the problem is that we’re bad at regulating privacy because most people don’t understand privacy. I’ve said this multiple times in the past, but the instinct of many is that privacy should be regulated as if our data were our “property.” That only leads to bad results. When we treat data as property, we create new, artificial property rights laws, a la copyright. And if you’re reading Techdirt, you should already understand what kind of awful mess that can create.

Artificial property rights are a problematic approach to just about anything, and (most seriously) frequently interfere with free speech rights and create all sorts of downstream problems. We’ve already seen this in the EU with the GDPR, which has many good characteristics, but also has created some real speech problems, while also making sure that only the biggest companies can exist, which isn’t a result anyone should want.

Over the last few weeks, there’s been a fair bit of buzz about APRA, the American Privacy Rights Act. It was created after long, bipartisan, bicameral negotiations between two elected officials with very different views on privacy regulation: Senator Maria Cantwell and Rep. Cathy McMorris Rodgers. The two had fought in the past on approaches to privacy laws, yet they were able to come to an agreement on this one.

The bill is massive, which is part of the reason why we’ve been slow to write about it. I wanted to read the whole thing and understand some of the nuances (and also to explore a lot of the commentary on it). If you want a shorter take, the best, most comprehensive summary I’ve seen came from Perla Khattar at Tech Policy Press, who broke down the key parts of the bill.

The key part of the bill is that it takes a “data minimization” approach. Covered companies (organizations making over $40 million a year, processing data on more than 200,000 consumers, or transferring covered data to third parties) need to make sure that the data they’re collecting is “necessary” and “proportionate” to the service they’re providing. If it’s determined that companies are collecting and/or sharing too much, they could face serious penalties.

Very big social media companies, dubbed “high impact social media companies” (those with over $3 billion in global revenue and 300 million or more global monthly active users), face additional rules.

I also greatly appreciate that the law explicitly calls out data brokers (often left out of other privacy bills, even though data brokers are often the real privacy problem) and requires them to take clear steps to be more transparent to users. The law also requires data minimization for those brokers, while prohibiting certain egregious activities.

I always have some concerns about laws with size thresholds. They create the risk of game playing and weird incentives. But of the bills in this area that I’ve seen, the thresholds in this one seem… mostly okay? Often thresholds are either ridiculously low, covering small companies too readily in a way that would create massive compliance costs too early, or they target only the very largest companies. This bill takes a more middle-ground approach.

There are also a bunch of rules to make sure companies do more to protect data security, following best practices that are reasonable based on the size of the company. I’m always a little hesitant about provisions like that, because whether or not a company took reasonable steps is often judged in hindsight, after some awful breach occurs, when we realize how poorly someone actually secured their data, even if up front it appeared secure. How this plays out in practice will matter.

The law is not perfect, but I’m actually coming around to the belief that it may be the best we’re going to get and has many good provisions. I know that many activist groups, including those I normally agree with, don’t like the bill for specific reasons, but I’m going to disagree with them on those reasons. We can look at EFF’s opposition as a representative example.

EFF objects to the state pre-emption provisions, and also wishes that the private right of action (allowing individuals to sue) were stronger. I actually disagree on both points, though I think it’s important to explain why. These were two big sticking points in previous bills, and I think they were sticking points for very good reasons.

On state pre-emption: many people (and states!) want to be able to pass stricter privacy laws, and many activists support that. However, I think the only way a comprehensive federal privacy bill makes sense is if it pre-empts state privacy laws. Otherwise, companies have to comply with 50+ different state privacy laws, some of which are going to be (or already are) absolutely nutty. This would, again, play right into the hands of the biggest companies, that can afford to craft different policies for different states, or that can figure out ways to craft policies that comply with every state. But it would be deathly for many smaller companies.

Expecting state politicians to get this right is a big ask, given just how messed up attempts to regulate privacy have been over the last few years. Hell, just look at California, where we basically let some super rich dude with no experience in privacy law force the state into writing a truly ridiculous, messed-up privacy law (then make it worse before anything was even tested) and finally… hand that same rich dude control over the enforcement of the law. That’s… not good.

It seems like the only workable way to do this without doing real harm to smaller companies is to have the federal government step in and say “here is the standard across the board.” I have seen some state officials upset about this, but the law still leaves states with the power to enforce that national standard.

That said, I’m still a bit wary about state enforcement. State AGs (in a bipartisan manner) have quite a history of doing enforcement actions for political purposes more than any legitimate reason. I do fear APRA giving state AGs another weapon to use disproportionately against organizations they simply dislike or have political disagreements with. We’ve seen it happen in other contexts, and we should be wary of it here.

As for the private right of action, again, I understand where folks like the EFF would like to see a broader private right of action. But we also know how this tends to work out in practice. Because of the ways in which attempts to stifle speech can be twisted and presented as “privacy rights” claims, we should be wary about handing too broad a tool for people to use, as we’ll start to see all sorts of vexatious lawsuits, claiming privacy rights, when they’re really an attempt to suppress information, or to simply attack companies someone doesn’t like.

I think APRA sets an appropriate balance in that it doesn’t do away with the private right of action entirely, but does limit how broadly it can be used. Specifically, it limits which parts of the law are covered by the private right of action in a manner that hopefully would avoid the kind of egregious, vexatious litigation that I’ve feared under other laws.

Beyond the states and the private right of action, the bill also sets up the FTC to be able to enforce the law, which will piss off some, but is probably better than just allowing states and private actors to be the enforcers.

I do have some concerns about some of the definitions in the bill being a bit vague and open to problematic interpretations and abuse on the enforcement side, but hopefully that can be clarified before this becomes law.

In the end, the APRA is certainly not perfect, but it seems like one of the better attempts I’ve seen to date at a comprehensive federal privacy bill and is at least a productive attempt at getting such a law on the books.

The bill does seem to be on something of a fast track, though there remain some points of contention. But I’m hopeful that, given the starting point of the bill, maybe it can reach a consensus that no one particularly likes, but which actually gets the US to finally level up on basic privacy protections.

Regulating privacy is inherently difficult, as noted. In an ideal world, we wouldn’t need regulations because we’d have services where our data is separate from the services we use (as envisioned in the protocols not platforms world) and thus more in our own control. But seeing as we still have plenty of platforms out there, the approach presented in APRA seems like a surprisingly good start.

That said, seeing how this kind of sausage gets made, I recognize that bills like this can switch from acceptable to deeply, deeply problematic overnight with small changes. We’ll certainly be watching for that possibility.

Posted on Techdirt - 22 April 2024 @ 01:35pm

Lawmakers Who Insisted The US Gov’t Should Never Combat Foreign Influence Online, Vote To Combat TikTok’s Foreign Influence Online

Is the US government allowed to step in to deal with foreign influence on social media or not? According to at least some members of Congress, the answer appears to be “yes, when we dislike what they’re saying, and no when we like what they’re saying.”

When the original House bill to “ban TikTok” passed, David Greene and Karen Gullo at EFF noted an odd contrast: dozens of Congressional Reps both signed an amicus brief with the Supreme Court in the Murthy case, arguing that the US government should simply never be allowed to interfere with speech (including to counter election misinformation), and also, just days later, voted to ban TikTok.

Over the weekend, the House once again passed a TikTok ban bill (similar to the original with a few small changes), which they bizarrely bundled with funding for Ukraine, Israel, and Taiwan. The bill passed the House 316 to 94, with the yeas and nays following no particular partisan breakdown.


And, I recognize it’s not perfectly fair to see “yea” votes as a clear vote for banning TikTok, given that this was a bundle of (mostly) foreign aid bills that I’m sure some members saw as much more important than the TikTok ban question. However, it does seem notable that so many Members of Congress insisted to the Supreme Court that the US government should never interfere with foreign influence campaigns online, but then voted to ban TikTok, in large part because of the risk that it might try to run foreign influence campaigns online.

Looking through the roll call and comparing it to the signatures on the amicus brief, I find 14 members of the House who think that it is clearly unconstitutional for the US to try to respond to foreign influence peddling, but who also believe that they can ban TikTok in response to concerns about foreign influence peddling.

The amicus brief is pretty clear on this point. It complains, specifically, about the FBI’s Foreign Influence Task Force, suggesting that the task force acted illegally in trying to respond to foreign influence peddling: “The federal government, specifically the FBI’s Foreign Influence Task Force (FITF), also used its power and influence to deceive and coerce social media companies.”

The amicus brief claims this is a clear First Amendment violation. From the brief:

Thus, the First Amendment stands against any governmental effort to coerce or otherwise burden the free speech of private entities— even if that action falls short of outright suppression.

And yet… for Reps. Jim Jordan, Elise Stefanik, Kelly Armstrong, Aaron Bean, Kat Cammack, Jerry Carl, Scott Fitzgerald, Russell Fry, Erin Houchin, Darrell Issa, Ronny Jackson, Max Miller, Guy Reschenthaler, and Claudia Tenney, apparently it’s only wrong for the government to burden the free speech of private entities when those entities aren’t connected to China. Once China is involved, suddenly principles go out the window, and of course the government can do this.

After all, those Reps both signed the amicus brief and voted in favor of the TikTok ban. To be fair, this is a smaller number than those who voted for the original TikTok ban. However, that difference is mainly explained by the fact that many of those who voted no here simply did not want to provide foreign aid to Ukraine.

Still, though, it would be nice if elected officials weren’t so openly hypocritical all the time. As the EFF post from a couple months ago noted:

We believe there is an appropriate role for the government to play, within the bounds of the First Amendment, when it truly believes that there are posts designed to interfere with U.S. elections or undermine U.S. security on any social media platform. It is a far more appropriate role than banning a platform altogether.

Posted on Techdirt - 22 April 2024 @ 12:00pm

Jonathan Haidt’s Book ‘The Anxious Generation’ Is Coddling The American Parent; Giving Them Clear, Simple & Wrong Explanations For What’s Ailing Teens

Jonathan Haidt’s new book, “The Anxious Generation,” has become a NY Times bestseller, and he’s making media appearances basically everywhere you look, telling people that social media has “rewired children’s minds” and that it is uniquely harmful.

We’ve talked about Haidt in the past, and especially his ability to consistently cherry-pick and misread the actual data on such things.

Haidt is telling a story a lot of people want to hear. The world has some very real problems these days, and blaming them on social media is clear, simple, and ultimately wrong. In getting the diagnosis wrong, it lets people feel better about themselves while brushing away the actual problems and avoiding the hard work of digging up real solutions.

The Daily Beast asked if I would review the book for them, and they’ve now published over 2000 words I wrote about the myriad problems with Haidt’s book. I’m not going to repeat it all here, so please go read it over there, as I put a lot of time into that review.

But, as a quick summary: he’s wrong on the data, which undermines his entire argument. Almost every single expert in the field who does actual research on these issues says so. Candice Odgers ripped apart his misleading use of data in Nature. Andrew Przybylski, who has done multiple, detailed studies using massive amounts of data going back years, and keeps finding little to no evidence of the things Haidt claims, has talked about the problems in Haidt’s data. Ditto Jeff Hancock, at Stanford, who recently helped put together the National Academies of Sciences report on social media and adolescent health (which also did not find what Haidt found).

Indeed, one thing that came up in looking over the “strongest” research in the book was that (contrary to some of Haidt’s claims) data from outside the US seems to show suicide rates often (though not always) going down, not up. Even worse, the US data showing an increase in depression rates among kids is almost certainly due, at least in part, to changes in screening practices for depression and in how suicidal ideation is recorded.

As my review notes, though, the problems with the data are only the very beginning of the problems with the book. Because, in the first part of the book, Haidt misleadingly throws around all the data, but in the latter part, he focuses on his policy recommendations and basically comes up with a bunch of very silly ideas that have no data to back them up:

He suggests raising the age at which kids can use certain websites from 13 to 16. Why 16? Based on his gut. He literally says he “thinks” age 16 “was the right one for the minimum age,” but presents no research or data to explain why. He notes that at that age they’re mature enough to handle the internet, though he doesn’t explain why.

And why suggest limiting access to age 16, rather than teaching kids digital literacy and how to better use the internet to avoid harms? He doesn’t say. He just decides what he thinks is right.

Elsewhere, he argues that there’s really little downside to implementing his policy solutions, and the review tries to dig into just how wrong that is. Cutting off kids from methods of communication that many use to find their communities, or to communicate with far-flung friends and family, can be really harmful.

But, the thing that gets me the most is that anyone who has actually spent time on internet policy issues knows that every policy solution in the space involves very serious nuances and tradeoffs. Haidt jumps into the deep end with a YOLO belly flop, giving zero consideration to any of those tradeoffs.

He supports KOSA, despite the fact that pretty much everyone agrees it will do real harm to LGBTQ+ teens. He supports age verification, despite data protection experts noting that it’s a privacy nightmare and the Supreme Court saying it violates the First Amendment.

He tends to brush off any concerns in the manner of a person who is selling a book, but has never had to deal with the actual nuances, tradeoffs, and consequences of the complicated policy decisions he doesn’t fully understand:

Some of Haidt’s suggestions are so disconnected from any actual research or data as to raise questions about exactly where he’s coming from. There’s an entire chapter talking about how the kids these days just need to be more spiritual and religious, which seems like an odd and out of place discussion in a book about social media (and, on a separate note there is at least some research suggesting that kids today are finding spirituality via social media).

When even his former co-author, Greg Lukianoff, pointed out that Haidt’s proposals clearly violate the First Amendment, Haidt’s only response was to suggest that if First Amendment advocates get together, he’s sure they can figure out ways to do age verification that are constitutional.

This is the classic “nerd harder” demands of a non-expert insisting that if actual experts try hard enough, surely they can make the impossible possible.

And my biggest concern in all this is that by playing up a new moral panic to sell books and the “Jonathan Haidt brand,” real harm is caused:

The actual harms of getting this wrong could be tremendous. By coddling the American parent, and letting them think they can cure what ails kids by simply limiting internet access, real harm can be caused.

Kids who actually do rely on the internet to find community and social interactions could grow further isolated. Even worse, it stops parents and teachers from dealing with actual triggers and actual problems, allowing them to brush it off as “too much TikTok,” rather than whatever real cause might be at play. It also stops them from training kids how to use social media safely, which is an important skill these days.

Treating social media as inherently harmful for all kids (when the data, at best, suggests only a very small percentage struggle with it) would also remove a useful and helpful tool from the many who can be taught to use it properly, in order to protect the small number who were not. Wouldn’t a better solution be to focus on helping everyone use these tools properly and in an age-appropriate manner?

As noted, there’s a lot more in there. Again, the full review clocks in at over 2000 words, but I’m hopeful that, even as Haidt’s book is getting widespread attention, people might, finally, begin to realize that he’s selling a bill of goods which appears to be a lot more harmful than the unproven harms he claims to be warning about.

Posted on Techdirt - 22 April 2024 @ 10:51am

Ctrl-Alt-Speech: The Difficulty Of Being A Teen Online

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

The episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Posted on Techdirt - 19 April 2024 @ 07:39pm

Senate Must Follow House’s Lead In Passing Fourth Amendment Is Not For Sale Act

The Fourth Amendment exists for a reason. It’s supposed to protect our private possessions and data from government snooping unless the government has a warrant. It doesn’t entirely prevent the government from getting access to data; it just requires a showing of probable cause of a crime.

But, of course, the government doesn’t like to make the effort.

And these days, many government agencies (especially law enforcement) have decided to take the shortcut that money can buy: they’re just buying private data on the open market from data brokers and avoiding the whole issue of a warrant altogether.

This could be solved with a serious, thoughtful, comprehensive privacy bill. I’m hoping to have a post soon on the big APRA data privacy bill that’s getting attention lately (it’s a big bill, and I just haven’t had time to go through the whole thing yet). In the meantime, though, there was some good news, with the House passing the “Fourth Amendment is Not For Sale Act,” which was originally introduced in the Senate by Ron Wyden and appears to have broad bipartisan support.

We wrote about it when it was first introduced, and again when the House voted it out of committee last year. The bill is not a comprehensive privacy bill, but it would close the loophole discussed above.

The Wyden bill simply says that if a government agency would otherwise have needed a warrant to get certain data, it needs a warrant to buy that data on the open market as well.

Anyway, the bill passed 219 to 199 in the House, and it was (thankfully) not a partisan vote at all.


It is a bit disappointing that the vote was so close and that so many Representatives want to allow government agencies, including law enforcement, to be able to purchase private data to get around having to get a warrant. But, at least the majority voted in favor of the bill.

And now, it’s up to the Senate. Senator Wyden posted on Bluesky about how important this bill is, and hopefully the leadership of the Senate understand that as well.

Can confirm. This is a huge and necessary win for Americans' privacy, particularly after the Supreme Court gutted privacy protections under Roe. Now it's time for the Senate to do its job and follow suit.


— Senator Ron Wyden (@wyden.senate.gov) Apr 17, 2024 at 3:30 PM

Posted on Techdirt - 19 April 2024 @ 03:08pm

‘Lol, No’ Is The Perfect Response To LAPD’s Nonsense ‘IP’ Threat Letter Over ‘Fuck The LAPD’ Shirt

We’ve had plenty of posts discussing all manner of behavior from the Los Angeles Police Dept. and/or the LAPD union here at Techdirt. As you might imagine if you’re a regular reader here, the majority of those posts haven’t exactly involved fawning praise for these supposed crimefighters. In fact, if you went on a reading blitz of those posts, you might even come away thinking, “You know what? Fuck the LAPD!”

Well, if you wanted to display your sentiments while you went about your day, you might have gone over to the Cola Corporation’s website to buy one particular shirt it had on offer there, before it completely sold out.

Now, it’s not uncommon for misguided entities to issue intellectual property threat letters over t-shirts and apparel, even when it is of the sort that is obviously fair use. Given that, you might have thought it would be the Los Angeles Lakers that sent a nastygram to Cola Corp. After all, the logo in question is clearly a parody of the LA Lakers logo.

Nope!

It was the Los Angeles Police Foundation via its IMG representatives. The LAPF is something of a shadow financier of the LAPD for equipment, including all manner of tech and gear. We have no idea how an entertainment agency like IMG got in bed with these assbags, but it was IMG sending the threat letter you can see below, chock full of all kinds of claims to rights that the LAPF absolutely does not and could not have.

If you can’t see that, it’s a letter sent by Andrew Schmidt, who represents himself as the Senior Counsel to IMG Worldwide, saying:

RE: Request to Remove Infringing Material From www.thecolacorporation.com
Dear Sir/Madam:

I am writing on behalf of IMG Worldwide, LLC (“IMG”). IMG is the authorized representative of Los Angeles Police Foundation (“LAPF”). LAPF is one of two exclusive holders of intellectual property rights pertaining to trademarks, copyrights and other licensed indicia for (a) the Los Angeles Police Department Badge; (b) the Los Angeles Police Department Uniform; (c) the LAPD motto “To Protect and Serve”; and (d) the word “LAPD” as an acronym/abbreviation for the Los Angeles Police Department (collectively, the “LAPD IP”). Through extensive advertising, promotion and the substantial sale of a full range of licensed products embodying and pertaining to the LAPD IP, the LAPD IP has become famous throughout the world; and as such, carries immeasurable value to LAPF.

We are writing to you regarding an unauthorized use of the LAPD IP on products being sold on your website, www.thecolacorporation.com (the “Infringing Product”). The website URL and description for the Infringing Product is as follows:
https://www.thecolacorporation.com/products fack-the- lupd pos-1&sid=435934961&&variant=48461787234611 FUCK THE LAPD
For the avoidance of doubt, the aforementioned Infringing Product and the image associated therewith are in no way authorized or approved by LAPF or any of its duly authorized representatives.

This letter hereby serves as a statement that:

  1. The aforementioned Infringing Product and the image associated therewith violate LAPF’s rights in the LAPD IP
  2. These exclusive rights in and to the LAPD IP are being violated by the sale of the Infringing Product on your website at the URL mentioned above;
  3. [Contact info omitted]
  4. On information and belief, the use of the LAPD IP on the Infringing Products is not authorized by LAPF, LAPF’s authorized agents or representatives or the law.
  5. Under penalty of perjury, I hereby state that the above information is accurate and I am duly authorized to act on behalf of the rights holder of the intellectual property at issue. I hereby request that you remove or disable access to the above-mentioned materials and their corresponding URLs as they appear on your services in as expedient a manner as possible.

So, where to begin? For starters, note how the letter breezily asserts copyright, trademark, and “other licensed indicia” without ever going into detail as to what it thinks it actually holds the rights to? That’s an “indicia” of a legal threat that is bloviating, with nothing to back it up. If you know what rights you have, you clearly state them. This letter does not.

If it’s a copyright play that the LAPF is trying to make, it’s going to go absolutely nowhere. The use is made for the purposes of parody and political commentary. It’s clearly fair use, and there are plenty of precedents to back that up. Second, what exactly is the copyright claim here? It’s not the logo. Again, if anything, that would be the Lakers’ claim to make. The only thing possibly related to the LAPD would be those letters: LAPD. And, no, the LAPD does not get to copyright the letters LAPD.

If it’s a trademark play instead, well, that might actually work even less for the LAPF, for any number of reasons. Again, this is parody and political commentary: both First Amendment rights that trump trademarks. More importantly, in trademark you have the question of the likelihood of confusion. We’re fairly sure the LAPF doesn’t want to make the case that the public would be confused into thinking that the Los Angeles Police Foundation was an organization that is putting out a “Fuck the LAPD” t-shirt. Finally, for there to be a trademark, there has to be a use in commerce. Is the LAPF selling “Fuck the LAPD” t-shirts? Doubtful.

But that’s all sort of beside the point, because the LAPF doesn’t have the rights IMG asserted in its letter. Again, the only possible claim the LAPF can make here is that it has ownership of the letters LAPD. And it does not. Beyond the fact that it had no “creative” input into the term, the LAPD is a city law enforcement agency, and you cannot copyright or trademark such a thing. And, as we’ve discussed multiple times in the past, government agencies don’t get to claim IP on their agency names. The only restrictions they can assert are on deceptive uses of logos/seals/etc.

But that is clearly not the case here. And we already have some examples from a decade ago of government agencies demanding the removal of parody logos and… it not ending very well for the government. 

So, what is actually happening here is that the LAPF/LAPD (via IMG) is pretending it has the right to screw with private citizens in ways it absolutely does not, and is using those false rights to harass those private persons with threatening behavior to intimidate them into doing what the LAPF wants. Which, if I’m being totally honest here, is certainly on brand as roughly the most police-y thing it could do in response to a simple t-shirt that is no longer even for sale.

Now, you might imagine that the Cola Corporation’s own legal team would reply to the silly threat letter outlining all of the above, crafting a careful and articulate narrative responding to all the points raised by the LAPF, and ensuring that their full legal skills were on display.

Instead, the company brought on former Techdirt podcast guest, lawyer Mike Dunford, who crafted something that is ultimately even better.

If you can’t read that, you’re not missing much. It says:

Andrew,

Lol, no.

Sincerely,
Mike Dunford

Perfect. No notes. May it go down in history alongside Arkell v. Pressdram, or the infamous Cleveland Browns response to a fan complaining about paper airplanes, as the perfect way to respond to absolutely ridiculous legal threat letters.

For what it’s worth, Dunford’s boss, Akiva Cohen, noted that this letter was “a fun one to edit.” We can only imagine.

This was a fun one to edit


— AkivaMCohen (@akivamcohen.bsky.social) Apr 18, 2024 at 2:47 PM

More posts from Mike Masnick >>