I think we can all agree that nobody seems to be taking the business of governing ourselves terribly seriously.
I say we can all agree on this because I think we all know that Donald Trump is a deranged, narcissistic criminal. We know this. Even those of you who nominally support him — because you have convinced yourself that he is the only thing standing between you and everything you hold dear, however you define that.
For some of you, I think the answer is quite pathetic, if we are being honest. The relationships you have built with your donors, with your executive networks, with all the perquisites that come with being an insider — you simply want to hold on to that, because you feel like that is where the action is. That is power. That is being somebody. That is mattering.
The truth is, you do matter. You hold a seat in the Congress of the United States. The most beautiful idea in the history of human civilization: that a people could govern themselves. Without a king or potentate. We used to be quite proud of that in this country. That we brought the ancient yearning to breathe free from the bondage of primitive hierarchies into the world by founding a republic of laws. Not of men. Not of women. Of laws.
⁂
You are, in some ways, the most powerful people who have ever lived in the history of human civilization. Because you, as members of the Article I Branch, have all the power you need to bring an end to this — if you can remember that you are human beings and Americans, and awaken to some semblance of your humanity. If you can make good on your Oath of Office: to defend the Constitution of the United States, that was fought and died for, that is burning before the eyes of history. You are its sworn stewards.
And yet you stand still.
⁂
An unconstitutional war has been started without your authorization. The Strait of Hormuz is closed. The largest liquefied natural gas facility in the Gulf is in flames. The supply shock now working its way through global energy, transportation, and food supply chains will make the inflation of 2021 to 2023 look like a mild inconvenience. That crisis destabilized governments across the democratic world and handed demagogues the opening they needed. What is coming is not in the same category.
A foreign head of state — a corrupt, post-truth demagogue, operating under criminal indictment in his own country, who attempted to seize control of his nation’s Supreme Court to keep himself out of jail, who channeled money to Hamas to pursue a divide-and-rule strategy against his own political opponents — has manipulated the machinery of the United States federal government, under color of Article II, to commit this nation to armed conflict without the consent of the governed. Without your vote. Without your authorization. In direct violation of the constitutional order you swore to defend.
Trump should already have been impeached and removed for giving the directive. That you have not done so is a fact history will record with considerable unkindness.
⁂
So as you look back on history, and think toward the future — do you simply have no shame? Do you feel nothing for the suffering you are permitting to persist? For the crimes that continue to accumulate? For the breach of sacred faith with those who gave the full measure of their devotion to this country, so that it might be free?
You can stop this. You have the power. The Constitution gave it to you precisely for moments like this one — precisely because the founders understood that executives with unchecked power tend, in time, toward exactly what we are watching now. They designed a remedy. They placed it in your hands.
The most powerful legislative body in the history of human civilization is watching the republic burn, and doing nothing, because its members have decided that their donor relationships and their insider access and their sense of being somebody matter more than the Oath they took, the Constitution they swore to defend, and the people whose lives are about to get very much worse because of a war they did not authorize and cannot stop.
You can quite literally help save the world. Because it needs saving right now.
I suggest you save it.
— Mike Brock
God bless America. May the Star-Spangled Banner yet wave over these lands.
Mike Brock is a former tech exec who was on the leadership team at Block. Originally published at his Notes From the Circus.
Last week we covered how the government successfully convinced Judge Colleen McMahon to order the plaintiffs in the DOGE/National Endowment for the Humanities (NEH) lawsuit to “claw back” the viral deposition videos they had posted to YouTube — videos showing DOGE operatives Justin Fox and Nate Cavanaugh stumbling through questions about how they used ChatGPT to decide which humanities grants to kill, and struggling mightily to define “DEI” despite it apparently being the entire basis for their work.
The government’s argument was that the videos had led to harassment and death threats against Fox and Cavanaugh — the same two who had no problem obliterating hundreds of millions in already approved grants with a simplistic ChatGPT prompt, but apparently couldn’t handle the public seeing them struggle to explain themselves under oath. The government argued the videos needed to come down. The judge initially agreed and ordered the plaintiffs to pull them. As we noted at the time, archivists had already uploaded copies to the Internet Archive and distributed them as torrents, because that’s how the internet works.
The ruling is worth reading in full, because McMahon manages to be critical of both sides while ultimately landing firmly against the government’s attempt to suppress the videos. She spends a good chunk of the opinion scolding the plaintiffs for what she clearly views as a procedural end-run — they sent the full deposition videos to chambers on a thumb drive without ever filing them on the docket or seeking permission to do so, which she sees as a transparent attempt to manufacture a “judicial documents” argument that would give the videos a presumption of public access.
McMahon doesn’t buy it:
When deciding a motion for summary judgment, the Court wants only those portions of a deposition on which a movant actually relies, and does not want to be burdened with irrelevant testimony merely because counsel chose to, or found it more convenient to, submit it. And because videos cannot be filed on the public docket without leave of court, there was no need for the rule to contain a specific reference to video transcriptions; the only way to get such materials on the docket (and so before the Court) was to make a motion, giving the Court the opportunity to decide whether the videos should be publicly docketed. This Plaintiffs did not do.
But if Plaintiffs wanted to know whether the Court’s rule applied to video-recorded depositions, they could easily have sought clarification – just as they could easily have filed a motion seeking leave to have the Clerk of Court accept the videos and place them on the public record. Again, they did not. At the hearing held on March 17, 2026, on Defendants’ present motion for a protective order, counsel for ACLS Plaintiffs, Daniel Jacobson, acknowledged the reason, stating “Frankly, your Honor, part of it was just the amount of time that it would have taken” to submit only the portions of the videos on which Plaintiffs intended to rely. Hr’g Tr., 15:6–7. In other words, “It would have been too much work.” That is not an acceptable excuse.
The Court is left with the firm impression that at least “part of” the reason counsel did not ask for clarification was because they wished to manufacture a “judicial documents” argument and did not wish to be told they could not do so. The Court declines to indulge that tactic.
Fair enough. But having knocked the plaintiffs for their procedural maneuver, the judge then turns to the actual question: has the government shown “good cause” under Rule 26(c) to justify a protective order keeping the videos off the internet? The answer is a pretty resounding no, because public officials acting in their official capacities have significantly diminished privacy interests in their official conduct:
The Government’s motion fails for three independent reasons. First, the materials at issue concern the conduct of public officials acting in their official capacities, which substantially diminishes any cognizable privacy interest and weighs against restriction. Second, the Government has not made the particularized showing of a “clearly defined, specific and serious injury” required by Rule 26(c). Third, the Government has not demonstrated that the prospective relief it seeks would be effective in preventing the harms it identifies, particularly where those harms arise from the conduct of third-party actors beyond the control of the parties.
She cites Garrison v. Louisiana (the case that extended the “actual malice” standard from NY Times v. Sullivan) for the proposition that the public’s interest “necessarily includes anything which might touch on an official’s fitness for office,” and that “[f]ew personal attributes are more germane to fitness for office than dishonesty, malfeasance, or improper motivation.” Given that these depositions are literally about how government officials decided to terminate hundreds of millions of dollars in grants, that framing fits.
The judge also directly calls out the government’s arguments about harassment and reputational harm, and essentially says: that’s the cost of being a public official whose official conduct is being scrutinized. Suck it up, DOGE bros.
Reputational injury, public criticism, and even harsh commentary are not unexpected consequences of disclosing information about public conduct. They are foreseeable incidents of public scrutiny concerning government action. Where, as here, the material sought to be shielded by a protective order is testimony about the actions of government officials acting in their official capacities, embarrassment and reputational harm arising from the public’s reaction to official conduct is not the sort of harm against which Rule 26(c) protects. Public officials “accept certain necessary consequences” of involvement in public affairs, including “closer public scrutiny than might otherwise be the case.”
As for the death threats and harassment — which McMahon explicitly says she takes seriously and calls “deeply troubling” and “highly inappropriate” — she notes that there are actual laws against threats and cyberstalking, and that Rule 26(c) protective orders aren’t a substitute for law enforcement doing its job:
There are laws against threats and harassment; the Government and its witnesses have every right to ask law enforcement to take action against those who engage in such conduct, by enforcing federal prohibitions on interstate threats and cyberstalking, see, e.g., 18 U.S.C. §§ 875(c), 2261A, as well as comparable state laws. Rule 26(c) is not a substitute for those remedies.
And then there’s the practical reality McMahon acknowledges directly: it’s too damn late. The videos have already spread everywhere. A protective order aimed solely at the plaintiffs would accomplish approximately nothing.
At bottom, the Government has not shown that the relief it seeks is capable of addressing the harm it identifies. The videos have already been widely disseminated across multiple platforms, including YouTube, X, TikTok, Instagram, and Reddit, where they have been shared, reposted, and viewed by at least hundreds of thousands of users, resulting in near-instantaneous and effectively permanent global distribution. This is a predictable consequence of dissemination in the modern digital environment, where content can be copied, redistributed, and indefinitely preserved beyond the control of any single actor. Given this reality, a protective order directed solely at Plaintiffs would not meaningfully limit further dissemination or mitigate the Government’s asserted harms.
Separately, the plaintiffs asked for attorney’s fees, and McMahon denied that too, noting that she wasn’t going to “reward Plaintiffs for bypassing its procedures” even though the government’s motion ultimately failed. So everyone gets a little bit scolded here. But the bottom line is clear: you don’t get to send unqualified DOGE kids to nuke hundreds of millions in grants using a ChatGPT prompt, and then ask a court to hide the video of them trying to explain themselves under oath.
Releasing full deposition videos is certainly not the norm, but given that these are government officials who were making massively consequential decisions with a chatbot and no discernible expertise, the world is much better off with this kind of transparency — even if Justin and Nate had to face some people on the internet making fun of them for it.
By any means, necessary or not: that’s how this administration gets its bigoted version of immigration enforcement done. The surges targeting cities and states that Trump doesn’t feel are loyal enough do double duty. They punish states run by Democrats simply for being run by Democrats. And they flood courts with more cases than they can possibly handle, allowing the government to deny rights and deport people at scale.
The government doesn’t always get away with it. But given the scale, the government generally doesn’t get reined in until long after massive amounts of damage have been done.
That’s the case here in Maryland, where a lawsuit initiated shortly after Trump began sending Venezuelans to El Salvador’s hellhole prison for purely punitive reasons continues to play out. It involves a Venezuelan asylum seeker who was ejected from the country via Trump’s non-wartime invocation of the Alien Enemies Act, used to excuse the government’s refusal to respect due process rights.
As is the case with many federal judges dealing with Trump’s war on migrants, Maryland federal judge Stephanie Gallagher no longer takes the government at its word. That’s why she has been ordering immigration officials to testify in court, where they can be cross-examined and/or questioned by the judge herself.
And that’s the last thing this government wants, because it can’t even survive the minimal judicial scrutiny of its filed motions, which are usually crafted by teams of lawyers and not by the front-line employees and supervisors judges are ordering to testify.
David Kurtz of Talking Points Memo attended a recent hearing held by Judge Gallagher in this long-running case. Gallagher and the plaintiff’s attorney wanted to know why the government seemed to be violating an existing court order when it wrongfully removed two other asylum seekers in February.
What they heard instead was a perhaps inadvertent admission by the government that the three known (and potentially illegal) removals being discussed were pretty much just a rounding error:
Before today, the number of wrongfully deported asylum seekers in the case was thought to be less than a dozen. But under persistent questioning from plaintiff’s counsel, U.S. Citizenship and Immigration Services asylum officer Kimberly Sicard testified that in the past three to four weeks it had come to her attention that more than 100 asylum seekers covered by the settlement agreement have been removed. She put the number in the “low 100s.”
That’s insane. Those are the actions of a government that truly does not care what illegal acts it engages in so long as they contribute to the end goal of subtracting non-white people from this nation.
And it’s obviously intentional. That much was made clear in Sicard’s testimony.
Asked how the additional removals had come to her attention, Sicard said she wasn’t sure of the exact process but that officials had “queried systems.” As part of the process of notifying ICE of the wrongful removals, the matter went to the office of chief counsel at USCIS three to four weeks ago, Sicard said.
That means the government can query its detention databases in order to prevent possibly illegal removals. It also means the government can find out how many illegal removals it might have engaged in. The “three to four weeks” just means the USCIS chief counsel spent a lot of time trying to figure out how to legally justify illegal removals that now total in the “low hundreds.” And it means these capabilities are either rarely used or, more likely, deliberately ignored by government agencies that have all been tasked with respecting rights first and carrying out their missions second.
Speaking of ignoring things, this revelation may never have occurred if the government had even attempted to comply with the judge’s previous court order:
The revelation was the pinnacle of a day of frustration for Gallagher. She had listed in her order calling the hearing five topics on which she expected the Trump administration to produce witnesses “with personal knowledge” to testify. The government failed to produce such witnesses.
“Failed” just means “refused” under Trump and his bigoted sidekicks. Because this administration felt this was just another court order it could ignore, someone without “personal knowledge” of the topics under discussion was sent to court to take the heat. And because she wasn’t expected to offer anything but shrugs, the USCIS officer responded honestly to questions that apparently weren’t covered by whatever minimal guidance DHS offered before she was put on the stand.
It’s this sort of sloppy arrogance that’s going to continue to derail some of the worst things this administration wants to do. And we’re safe to assume the arrogance and sloppiness will continue, because Trump has made absolutely no effort to rid himself of loyalists, no matter how sloppy, stupid, and undeservedly arrogant they are.
I remember, pretty clearly, my excitement over the early World Wide Web. I had been on the internet for a year or two at that point, mostly using IRC, Usenet, and Gopher (along with email, naturally). Some friends I had met on Usenet were students at the University of Illinois at Urbana-Champaign, and told me to download NCSA Mosaic (this would have been early 1994). And suddenly the possibility of the internet as a visual medium became clear. I rushed down to the university bookstore and picked up a giant 400ish page book on building websites with HTML (I only finally got rid of that book a few years ago). I don’t think I ever read beyond the first chapter. But what I did do was learn how to right click on webpages and “view source.”
And from that, magic came.
I had played around with trying to build websites, and I remember another friend telling me about GeoCities (I can’t quite recall if this was before or after they had changed their name from their original “Beverly Hills Internet”) handing out web sites for free. You just had to create the HTML pages and upload them via FTP.
And so I started designing really crappy websites. I don’t remember what the early ones had, but like all early websites they probably used the blink tag and had under construction images and eventually a “web counter.”
But the thing I do remember was the first time I came across Derek Powazek’s Fray online magazine. It was the first time I had seen a website look beautiful. This was without CSS and without JavaScript. I still remember quite clearly an “issue” of Fray that used frames to create some kind of “doors” you could slide open to reveal an article inside.
Right click. View source. Copy. Mess around. A week later I had my own (very different) version of the sliding doors on my GeoCities site, but using the same HTML bones as Derek’s brilliant work.
You could just build stuff. You could look at what others were doing and play around with it. Copy the source, make adjustments, try things, and have something new. There were, certainly, limitations of the technology, but it was incredibly easy for anyone to pick up. Yes, you had to “learn” HTML, but you could pick up enough basics in an afternoon to build a decent looking website.
But then two things happened, and it’s worth separating them because they’re different problems with different causes.
First, the technical barrier went up. CSS and JavaScript opened up incredible possibilities to make websites beautiful and interactive, but they also meant it was a lot more difficult to just view source, copy, and mess around. The gap between “basic functional website” and “actually looks good” widened into a chasm that required real expertise to cross. Plenty of dedicated people learned these skills, but the casual tinkerer — the person who’d spend an afternoon copying Derek’s frames to make sliding doors — increasingly couldn’t keep up.
But the technical complexity alone didn’t kill amateur web building. The centralization did. While there was an interim period where people set up their own blogs, it quickly moved to walled “social media gardens” where some giant tech company decided what your page looked like. Why bother learning CSS when you could just dump text in a Facebook box and reach more people? The incentive to build your own thing evaporated, replaced by the convenience of posting to someone else’s platform under someone else’s (hopefully benign) rules.
These two problems reinforced each other. The harder it got to build your own thing, the more attractive the walled gardens became. The more people moved to walled gardens, the less reason there was to learn to build.
The rise of agentic AI tools is opening up an opportunity to bring us back to that original world of wonder where you could just build what you wanted, even without a CS degree. And here I need to be specific about what I mean by “agentic AI” — because too many people are overly focused on the chatbots that answer questions or generate text or images for you. I’m talking about AI systems that can actually do things: write code, execute it, debug it, iterate on it based on your feedback. Tools like Claude Code, Cursor, Codex, Antigravity, or similar coding agents that can take a description of what you want and actually build it.
For all those years that tech bros would shout “learn to code” at journalists, the reality now is that being able to write well and accurately describe things is a superpower that is even better than code. You can tell a coding agent what to do… and for the most part it will do it.
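To make that concrete: at its core, a coding agent is a loop that writes code, runs it, reads the errors, and tries again. Here’s a minimal sketch in Python. The ask_model() helper is a hypothetical placeholder for whatever model backend you use; real tools like Claude Code layer sandboxing, file-system context, and human feedback on top of this basic cycle.

```python
# A minimal sketch of the loop a coding agent runs: generate code, execute
# it, feed errors back, iterate. ask_model() is a hypothetical stand-in for
# whatever LLM backend you use; it is not any particular vendor's SDK.
import subprocess
import sys
import tempfile

def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to your model, get Python source back."""
    raise NotImplementedError("wire this up to your model of choice")

def build_and_iterate(task: str, max_attempts: int = 5) -> str:
    prompt = f"Write a self-contained Python script that does this: {task}"
    for _ in range(max_attempts):
        code = ask_model(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=60
        )
        if result.returncode == 0:
            return code  # the script ran cleanly; hand it back
        # Show the model its own error output and let it try again.
        prompt = (
            f"This script failed:\n\n{code}\n\n"
            f"Error output:\n{result.stderr}\n\n"
            "Fix it and return the complete corrected script."
        )
    raise RuntimeError(f"no working script after {max_attempts} attempts")
```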
Let me give you the example that still kind of blows my mind. A few weeks ago, in the course of a Saturday — most of which I actually spent building a fence in my yard — I had a coding agent build an entire video conferencing platform. It built a completely functional platform with specific features I’d wanted for years but couldn’t find in existing tools. I’ve now used it for actual staff meetings. The fence took longer to build than the software.
All it took was describing what I wanted to an agent that could code it for me. And it addresses both problems I described earlier: it lowers the technical barrier back down to “can you describe what you want clearly?” while also enabling you to build your own thing rather than accepting whatever some platform offers you.
Over the last few months I’ve been finding I need to retrain my brain a bit about what we accept and learn to deal with vs. what we can fix ourselves. In the past I’ve talked about the learned helplessness many people feel about the tech that we use. We know that it’s vaguely working against us, and we all have to figure out what trade-offs we’re willing to accept to accomplish whatever goals we have.
But what if we could just fix things rather than accepting the tradeoffs?
I’ve talked in the past about how I’ve used an AI-assisted writing tool called Lex over the past few years, which doesn’t write for me, but is a very useful editorial assistant. Over the last few months, though, I decided to see if I could effectively rebuild that tool myself, fully controlled by me, without having to rely on a company that might change or enshittify the app. I actually built it directly into the other big AI experiment I’ve spoken about: my task management tool, which I’ve also moved away from a third-party hosting service onto a local machine. Indeed, I’m writing this article right now in that tool (I first created a task to write about it; checking a box to mark it as a “writing project” automatically opened a blank page for me to write in, and when I’m done, I’ll click a button and it will do a first-pass editorial review).
But the amazing thing to me is that I keep remembering I can fix anything I come across that doesn’t work the way I want it to. With any other software I have to adjust. With this software, I just say “oh hey, let’s change this.” I find that a few times a week I’ll make a small tweak here or there that just makes the software even better. In the past, I would just note a slight annoyance and figure out how to just deal with software not working the way I wanted. But now, my mind is open to the fact that I can just make it better. Myself.
An example: literally last night, I realized that the page in the task tool that lists all the writing projects I’m working on was getting cluttered by older completed projects that were listed as still being in “drafting” mode. With other tools (including the old writing tool I was using), I would just learn to mentally compartmentalize the fact that the list of articles was a mess and train myself to ignore the older articles and the digital clutter. But here, I could just lay out the issue to my coding agent, and after some back and forth, we came up with a system whereby once a task on the task management side was checked off as “completed” the corresponding writing project would similarly get marked as completed and then would be hidden away in a minimized list.
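To give a sense of how small these fixes tend to be, here’s a toy sketch of that kind of cascade. Every name in it is hypothetical; the real version lives inside my bespoke tool and was written conversationally with the agent rather than by hand:

```python
# A toy sketch of completed-task-to-writing-project syncing. All classes and
# field names here are hypothetical illustrations, not the actual tool.
from dataclasses import dataclass

@dataclass
class WritingProject:
    title: str
    status: str = "drafting"  # "drafting" -> "completed"

@dataclass
class Task:
    title: str
    done: bool = False
    project: WritingProject | None = None

    def complete(self) -> None:
        self.done = True
        if self.project:  # cascade completion to the linked writing project
            self.project.status = "completed"

def visible_projects(projects: list[WritingProject]) -> list[WritingProject]:
    # completed projects drop out of the default view into a minimized list
    return [p for p in projects if p.status != "completed"]
```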
I keep coming across little things like this that, in the past, I would have been mildly annoyed by, but needed to live with. And it’s taking some effort to remind myself “wait, I don’t have to live with this, I can fix it.” Rather than training my brain to accept a product that doesn’t do what I want, I can just tell it to work better. And it does.
And, the more I do that, the more I start to open up my mind to possibilities that were impossible before. “Huh, wouldn’t it be nice if this tool also had this other feature? Let’s try it!” I find that the more I do this, the bigger my vision gets of what I can do, because a whole range of things that were fundamentally impossible before are now open to me, just by describing what I want.
It really does give me that same underlying feeling that I felt when I was first playing around with HTML and being able to “just make things.” Except, now, it’s way more powerful. Rather than copying Derek’s use of HTML frames to create “sliding doors” on a webpage, I can create basically anything I dream up.
Then, when combined with open social protocols, you can build in social features or identity to any service as well — without having to worry about getting other users. They’re already there. For example, my task management tool sends me a “morning briefing” every day that, among other things, scans through Bluesky to see if there’s anything that might need my attention.
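As a rough illustration of what that Bluesky-scanning piece might look like, here’s a sketch using the community atproto Python SDK (pip install atproto). The keyword filter is a placeholder assumption; the real tool’s sense of what “needs my attention” was built up conversationally with the agent:

```python
# A rough sketch of the Bluesky-scanning piece of a "morning briefing" tool,
# using the community atproto Python SDK. The KEYWORDS list and the notion
# of "needs attention" are placeholder assumptions for illustration.
from atproto import Client

KEYWORDS = ["techdirt", "atproto", "moderation"]  # whatever you care about

def morning_briefing(handle: str, app_password: str) -> list[str]:
    client = Client()
    client.login(handle, app_password)  # use an app password, not your main one
    timeline = client.get_timeline(limit=100)
    flagged = []
    for item in timeline.feed:
        text = getattr(item.post.record, "text", "") or ""
        if any(k in text.lower() for k in KEYWORDS):
            flagged.append(f"@{item.post.author.handle}: {text[:120]}")
    return flagged

if __name__ == "__main__":
    for line in morning_briefing("you.bsky.social", "xxxx-xxxx-xxxx-xxxx"):
        print(line)
```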
Now, there are legitimate criticisms of “vibe coded” tools. Critics point out that AI-generated code can be buggy, insecure, hard to maintain, and that users who can’t read the code can’t verify what it’s actually doing. These are real concerns — for certain contexts.
The thing is, most of these criticisms apply to tools being built as businesses to serve customers at scale. If you’re shipping code to millions of users who are depending on it, you absolutely need security audits, proper testing, maintainable architecture. But that’s not what I’m talking about. I’m talking about building totally customized, personal tools for yourself—tools where you’re the only user, where the stakes are “my task list doesn’t sync properly” rather than “customer data got leaked.”
There’s also a more subtle concern worth addressing: is this actually democratizing, or does it just shift which skills you need? After all, you still need to accurately describe what you want, debug when things go wrong, and understand what’s even possible. That’s different from learning HTML, but it’s still a skill. I think the honest answer is that the kind of skill needed has shifted. “Learn to code” becomes “learn to think clearly and describe things precisely” — which happens to be a superpower that writers, editors, and domain experts already have. The barrier has moved to territory that many more people already inhabit.
It’s also an area where you can easily start small, learn, and grow. I started by building a few smaller apps with simpler features, but the more I do, the more I realize what’s possible.
Also, I’d note that this is actually an area where the LLM chatbots are kind of useful. Before I kick off an actual project with a coding agent, I’ve found that talking it through with an LLM first helps sharpen my thinking on what to tell the agent. I don’t outsource my mind to the chatbot, and will often reject some of its suggestions, but in having the discussion before setting the agent to work, it often clarifies tradeoffs and makes me consider how to best phrase things when I do move over to the agent.
What gets missed in most conversations about AI and the open web: these two pieces need each other. Open social protocols without AI tools stay stuck in the domain of developers and the highly technical — which is exactly why adoption has been slow. And AI tools without open protocols just replicate the old problem: you’re building cool stuff, but you’re still trapped inside someone else’s walls.
Put them together, though, and something clicks. Open protocols like ATProto give AI agents bounded, consent-driven contexts to work in — your agent can scan your Bluesky feed because the protocol allows that, not because some company decided to grant API access that it could revoke tomorrow. And AI agents give regular people the ability to actually build on those protocols without needing an engineering team. My morning briefing tool scans Bluesky not because I wrote a bunch of API calls, but because I described what I wanted and a coding agent made it happen.
Each piece makes the other more powerful and safer.
Blaine Cook — who was Twitter’s original architect back when it was still a protocol-minded company — recently wrote a piece at New_ Public that gets at this from the infrastructure side:
My long-standing hope has been that we’re able to move past the extractive, monopolizing, and competitive phase of social networks, and into a new era of creativity, collaboration, and diversity. I believe we’re poised to see a Cambrian explosion of new ways to interact online, and there’s evidence to suggest that it’s already happening: just today, I saw three new apps to share what you’re reading and watching with friends, each with their own unique take on the subject!
In this light, LLMs may be a killer app for decentralized networks — and decentralized networks may be the missing constraint that makes LLM integrations safer, more legible, and more aligned with user interests. It’s a symbiosis, and I believe we need both pieces. Rather than trying to integrate LLMs with everything, I think that deliberately bounded, consent-driven integrations will produce better outcomes.
Cook’s framing of LLMs as a “killer app for decentralized networks” is exactly right — and it runs the other way too. Decentralized networks might be the killer app for making AI tools something other than another vector for corporate lock-in, or just another clone of an existing centralized service.
Now, I can already hear the objection, and it’s a fair one: am I really suggesting we escape dependence on giant tech platforms by… becoming dependent on giant AI companies? Companies that have scraped the entire web, that burn massive amounts of energy and water, that are built on the labor of underpaid content moderators, and that seem to want to consolidate power in ways that look an awful lot like the last generation of tech giants?
Yeah, I get it. If the pitch is “use OpenAI to free yourself from Meta,” that’s just switching landlords.
But that’s not actually where this is heading. The trajectory matters more than the current snapshot.
First, if you’re using frontier models through the API or a pro subscription, you have significantly more control than most people realize. Your data generally isn’t feeding back into training. You’re using the model as a tool, not handing over your content to a platform. That’s a meaningfully different relationship than the one you have with social media companies, where you’re feeding them data, and their business model is based on monetizing that data.
But much more importantly, you don’t have to use the frontier models at all. Open source AI is maturing fast — models like Qwen, Kimi, and Mistral can run entirely on your own hardware, no cloud required. They’re behind the frontier models, but only by a bit. Six months to a year, roughly. But for a lot of the “build your own tools” use cases I’m describing, they’re already good enough.
Musician and YouTuber Rick Beato recently showed how easy it was for him to install local models on his own machine, and why he thinks the largest AI companies will eventually be undercut by home AI usage:
I’ve been doing something similar with Ollama hosting a Qwen model locally. It’s slower and less sophisticated. But it works. And I already use different models for different tasks, defaulting to local when I can. As those models improve — and they are improving quickly — the frontier labs become less necessary, not more. If you’re a professional, perhaps you’ll still need them. But if you’re just building something for yourself, it’s less and less necessary.
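For the curious, here’s roughly what talking to a locally hosted model looks like, as a minimal sketch against Ollama’s local HTTP API (which listens on localhost:11434 by default). It assumes you’ve already pulled a Qwen model; the exact model tag depends on what you installed:

```python
# A minimal sketch of querying a locally hosted model via Ollama's HTTP API.
# Assumes you've already run something like `ollama pull qwen2.5`; swap in
# whatever model tag you actually have installed.
import requests

def ask_local_model(prompt: str, model: str = "qwen2.5") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local_model("Summarize why local models matter for personal tools."))
```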
This is what the “AI is just another Big Tech power grab” critics are missing: the technology is moving toward decentralization, not away from it. That’s unusual. Social media started decentralized and got captured. AI is starting captured and getting more open over time. The economic pressure from open source models is real, and it’s pushing in the right direction. But it’s important we keep things moving that way and not slow down the development of open source LLMs.
On the training data question — which is a legitimate concern whether or not you think training on copyrighted works is fair use — efforts like Common Corpus are building large-scale training sets from public domain and openly licensed materials. Anil Dash has been writing about what “good AI” looks like in practice — AI that’s transparent about its training data, that respects consent, that minimizes externalities rather than ignoring them. There are ways to do this right.
None of this is fully solved yet. But the direction is clear, and the tools to do it responsibly are improving faster than most critics acknowledge.
When you use AI as a tool (rather than letting it use you as the tool), it can give you a kind of superpower to get past the learned helplessness of relying on whatever choices some billionaire or random product manager made for you. You can get past having to mentally compensate for your tools not really working the way you think they should work. Instead, you can just have the internet and your tools work the way you want them to. It’s the most excited I’ve been about the open web since those early days of realizing I could right click, copy and then figure out how to build sliding doors out of frames.
The promise of the open web was colonized by internet giants. But the power of LLMs and agentic coding means we can start to take it back. We can build customized, personal software for ourselves that does what we want. We can connect with communities via open social protocols that allow us to control the relationship rather than a billionaire intermediary. This is what the Resonant Computing Manifesto was all about, and why I’ve argued ATProto is so key to that vision.
But the other part of realizing the manifesto is the LLM side. That made some people scoff early on, but hopefully this piece shows how these things work hand in hand. These agentic AI tools give the power back to you and me.
Thirty years ago, I right-clicked on Derek Powazek’s beautiful website, viewed the source, copied it, messed around with it, and built something new. I didn’t ask anyone’s permission. I didn’t agree to terms of service. I didn’t fit my ideas into someone else’s template. I just built the thing I wanted to build.
Then we gave that away. We traded it for convenience, for reach, for the path of least resistance — and we got walled gardens, manipulated feeds, and the quiet understanding that our tools would never quite work the way we wanted them to, because they weren’t really ours.
Today’s equivalent of right-clicking on Derek’s site is describing what you want to a coding agent, watching it build, telling it what’s wrong, and iterating until it works for you. Different mechanics, same magic. And this time, with open protocols and increasingly open models, we have a shot at keeping it.
Taking a break from attacking the First Amendment, FCC boss Brendan Carr this week engaged in a strange bit of performance art: his FCC announced that it would be effectively adding all foreign-made routers to the agency’s “covered list,” in a bid to ban their sale in the United States.
That is, unless manufacturers obtain “conditional approval” (including all appropriate application fees and favors, of course) from the Trump administration via the Department of Defense or Department of Homeland Security. In other words, the Trump administration is attempting to shake down the makers of all routers manufactured outside the United States (which, again, is nearly all of them) under the pretense of cybersecurity.
You can probably see how this might result in some looming legal action. And who knows what other “favors” to the Trump administration might be required to get conditional approval, like the inclusion of backdoors accessible by our current authoritarian government.
A fact sheet insists this was all necessary because many foreign routers have been exploited by foreign actors:
“Recently, malicious state and non-state sponsored cyber attackers have increasingly leveraged the vulnerabilities in small and home office routers produced abroad to carry out direct attacks against American civilians in their homes.”
We’ve discussed at length that while Brendan Carr loves to pretend he’s doing important things on cybersecurity, most of his policies have made the U.S. less secure. Like his mindless deregulation of the privacy and security standards of domestic telecoms and hardware makers. Or his destruction of smart home testing programs just because they had some operations in China.
Most of the Trump administration’s “cybersecurity” solutions have been indistinguishable from a foreign attack. They’ve gutted numerous government cybersecurity programs and dismantled the Cyber Safety Review Board (CSRB), the body responsible for investigating significant cybersecurity incidents, including the Salt Typhoon telecom intrusions it was examining when it was disbanded. The administration claims to be worried about cybersecurity, but then goes out of its way to ensure domestic telecoms see no meaningful oversight whatsoever.
I’d argue the Trump administration’s destruction of oversight of domestic telecom privacy and security practices is a much bigger threat to national security and consumer safety than 90% of foreign routers, but good luck finding any news outlet that brings that up in its coverage of the FCC’s latest move.
In reality, the biggest current threat to U.S. national security is the Trump administration’s rampant, historic corruption. Absolutely any time you see the Trump administration taking steps to “improve national security,” or “address cybersecurity” you can just easily assume there’s some ulterior motive of personal benefit to the president, as we saw when the great hyperventilation over TikTok was “fixed” by offloading the app to Trump’s dodgy billionaire friends.
The polarization over any and all uses of artificial intelligence and machine learning continues. And, to be clear, I very much understand why this is all so controversial. Any new technology that has the chance to be transformative will also necessarily be disruptive and that causes fear. Fear that is not entirely unfounded, no matter your other opinions on the matter. If that’s you, cool, I get it.
I’ll start this off by pointing to the latest edition of the Techdirt podcast in which both Mike and Karl engaged in a fantastic discussion about the use of AI. I’ve listened to it twice now; it’s that good. And, while I found myself arguing out loud with the both of them at certain points during the podcast, despite the fact that neither of them could hear my retorts, it presents a grounded, often nuanced conversation, which we need much more of in this space.
And now, in what might be a subconscious attempt by this writer to commit suicide by comments section, let’s talk about that controversial demo of NVIDIA’s forthcoming DLSS 5 technology. What DLSS 5 does compared with previous versions of the technology is indeed new, but what is not new is the introduction of AI and machine learning into the equation. DLSS 2 and 3 had that already, in the form of pixel reconstruction and frame generation. DLSS 5, however, introduces what is being labeled “neural rendering,” which uses machine learning to alter lighting, environmental detail, and, most importantly, character rendering, applied over the engine’s 2D image output. Here’s the video demo that got everyone talking.
The backlash to the video was wide, immediate, and furious. There was a great deal of talk about the alteration of artistic intent, about whether this changed what the original developers were attempting to portray when they created the games, and, of course, industry jobs. I want to address the major complaints seen across many outlets below, but this backlash also reportedly came with death threats directed at NVIDIA employees. I would very much hope we could all at least agree that any threats of that nature are completely inappropriate and absurd.
With that, here is what I’ve seen in the backlash and what I’d want to say about it.
Get your damned AI out of my games!
Perhaps not the most common pushback I saw in all of this, but a very common one. And a silly one, too. As I mentioned above, previous DLSS versions already used some form of AI and machine learning. That isn’t new. How it’s applied is certainly new, but that isn’t the same as the demand to keep AI entirely out of the video game industry.
And if that’s where you are, go ahead and shake your fist at the clouds. AI is a tool and, as I’ve now said repeatedly, the conversation we should be having is how it’s used in gaming, not if it’s used. Its use is largely a foregone conclusion, and it remains an open question whether that use will be a net benefit or a net negative for the industry overall. Dogmatic purists on AI have a stance that is understandable, but also untenable. We’re too far down this road to turn around and go home. And if the tech were able to lower the barriers of entry to the gaming industry, acting as the fertilizer that allows a thousand indie studios to sprout roots, would that really be so bad for the gaming ecosystem?
I can appreciate the purists’ point of view. I really can. I just don’t see where they have a place in the conversation when it comes to gaming.
It overrides artistic intent!
Does it? If it did, then hell yes that’s bad. But if it doesn’t, then this concern goes away entirely.
DLSS 5 is built with options and customizable sliders for game developers. That’s really, really important here. At the macro level, a developer that has decided to use DLSS 5, and has customized how it’s used in their games, is exercising consent over their products. That should be obvious.
But then we get into really interesting questions of art, the actual artist, and the ownership of that art, because those last two are very different things. As Digital Foundry outlines:
It may even raise consent and other questions surrounding artistic integrity. On site and witnessing the demos in motion, concerns about this seemed less of a problem when the games we saw had been signed off by the studios that made them – the contentious assets we’ve seen, likewise. Nothing from the DLSS 5 reveal released by Nvidia has not been approved by the studios that own those games. But perhaps the issue isn’t just about specific approvals by specific developers on agreed DLSS 5 integrations, but rather the whole concept of a GPU reinterpreting game visuals according to a neural model that has its own ideas about what photo-realism should look like.
While we’ve seen endorsements from Bethesda’s Todd Howard and Capcom’s Jun Takeuchi, to what extent does that consent apply to the entire development team and other artists associated with the production? And by extension, there is also the question of whether now is the right time to launch DLSS 5 at a time when the games industry is under enormous pressure, jobs are on the line and cost-cutting is a major focus in the triple-A space. The technology itself cannot function without the work of game creators – it needs final game imagery to work at all – but the extent to which it could be viewed as a worrying sign of “things to come” cannot be overstated bearing in mind the reactions elsewhere to generative AI.
That strikes me as a valid and interesting ethical question when it comes to the use of this technology, but one that is probably overwrought. Individual artists who work on video games already have their artistic output live at the pleasure of the game developers they contract with. Those developers already can use this game art in all kinds of ways that the individual artist may not have had in mind when creating it, or indeed have even considered such possibilities. DLSS 5 is just one more version of that, with the main difference being that it involves AI making changes to game images. That’s an important thing to consider, sure, but there are cousins to this ethical question that we’ve all come to accept already. This strikes me more as part of the “all AI is bad all the time” crowd finding a foothold in something other than dogma to grab onto.
Developers and publishers own their games. If they want to use DLSS 5 in those games, there is little other than specific work for hire or other contractual stipulations with individual artists that would keep them from implementing it. If artists don’t like that, I completely understand that point of view, but that’s what contract negotiations and language are for.
Bottom line: I have been as vocal as anyone arguing that video games are a form of art for well over a decade now, and I struggle to agree that an optional technology with approved buy-in from game developers and publishers equates to “overriding artistic intent,” writ large.
The faces in these examples look like shit, are “yassified”, or suffer from the uncanny valley effect!
Look, here we’re going to get into matters of opinion. I have to say that when I viewed the demo video myself, I had the opposite reaction. And, yes, this opens me up to claims that I am somehow a massive fan of AI-created pornography (this is where the yassified comments come in), or that I just want all the characters to look “hot” (I’m too old for that shit), or that my older age of 44 means I’ve lost touch with what video games should look like. Despite my genuine respect for the dissenting opinions here, allow me to say this: bullshit.
The caveat to all of this is that the demo revealed very little in the way of this technology working within these games in motion. It’s also certainly true that NVIDIA chose the best potential images to show off its new technology. If the DLSS 5 rendering sucks out loud in a larger in-motion game, or if the images it creates end up being inconsistent throughout gameplay, or if it does just end up looking shitty, then I’ll be right there with you with a torch and pitchfork in hand.
And here’s the other thing to consider with this particular complaint, combined with the previous one about artistic intent: do any of you use visual mods in your games? I do. A ton of them. For a variety of reasons. I have used them to alter the faces and models for games like Starfield and Skyrim, among many others. Do I need to feel bad for altering the artist’s intent? Do I need to apologize for incorporating mods to make characters and environments appear in a way that helps me better connect with the game I’m playing?
Because I’m not going to do either. And I don’t expect you to. Nor do I expect game developers that choose to use this optional technology to beg for forgiveness for their own output.
The hardware demands to run all of this are insane!
Fine, then you’ll get what you want and nobody will be able to use this technology anyway. But I don’t think that will be the case. NVIDIA knows what it will take to run this tech once it leaves the demo stage and goes into production. The idea that they would hype up technology that nobody can use strikes me as unlikely in the extreme.
Conclusion: everyone take a breath
This still strikes me as more of the “all AI is bad” crowd grasping at lots of other things to buttress their pushback than anything else. AI has plenty, plenty of potential pitfalls. Worried about jobs in the gaming industry and elsewhere? Me too! But if you’re not also looking at the potential upsides for the industry, then you’re engaging in dogma, not conversation.
Will DLSS 5 be good? I have no idea and neither do you. Will DLSS 5 alter previously released games in a way that fundamentally alters how we play these games? I have no idea and neither do you. Will it negatively impact the gaming industry when it comes to the number of jobs within it? I have no idea and neither do you.
This was a tech demo. Details on how it works are still trickling out. Most recently, there has been some clarification as to the 2D rendering nature of the technology and what that means for the output on the screen. As an early demo of the technology, feedback is going to be important, so long as it’s informed and reasonable feedback.
The technology may end up being trash and hated for reasons other than “all AI is bad all the time.” If that ends up being the case, I trust the gaming market to work that out for itself. But a lot of the hand-wringing here looks to me to be speculative at best.
As many of the AI stories on Walled Culture attest, one of the most contentious areas in the latest stage of AI development concerns the sourcing of training data. To create high-quality large language models (LLMs) massive quantities of training data are required. In the current genAI stampede, many companies are simply scraping everything they can off the Internet. Quite how that will work out in legal terms is not yet clear. Although a few court cases involving the use of copyright material for training have been decided, many have not, and the detailed contours of the legal landscape remain uncertain.
However, there is an alternative to this “grab it all” approach. It involves using materials that are either in the public domain or released under a “permissive” license that allows LLMs to be trained on them without any problems. There’s plenty of such material online, but its scattered nature puts it at a serious disadvantage compared to downloading everything without worrying about licensing issues. To address that, the Common Corpus was created and released just over a year ago by the French startup Pleias. A press release from the AI Alliance explains the key characteristics of the Common Corpus:
Truly Open: contains only data that is permissively licensed and provenance is documented
Multilingual: mostly representing English and French data, but contains at least 1[billion] tokens for over 30 languages
Diverse: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
Extensively Curated: spelling and formatting has been corrected from digitized texts, harmful and toxic content has been removed, and content with low educational content has also been removed.
There are five main categories of material: OpenGovernment, OpenCulture, OpenScience, OpenWeb, and OpenSource:
OpenGovernment contains Finance Commons, a dataset of financial documents from a range of governmental and regulatory bodies. Finance Commons is a multimodal dataset, including both text and PDF corpora. OpenGovernment also contains Legal Commons, a dataset of legal and administrative texts. OpenCulture contains cultural heritage data like books and newspapers. Many of these texts come from the 18th and 19th centuries, or even earlier.
OpenScience data primarily comes from publicly available academic and scientific publications, which are most often released as PDFs. OpenWeb contains datasets from YouTube Commons, a dataset of transcripts from public domain YouTube videos, and websites like Stack Exchange. Finally, OpenSource comprises code collected from GitHub repositories which were permissibly licensed.
The initial release contained over 2 trillion tokens – the usual way of measuring the volume of training material, where tokens can be whole words or parts of words. A significant recent update of the corpus has taken that to over 2.267 trillion tokens. Just as important as the greater size is the wider reach: there are major additions of material from China, Japan, Korea, Brazil, India, Africa and South-East Asia. Specifically, the latest release contains data for eight languages with more than 10 billion tokens (English, French, German, Spanish, Italian, Polish, Greek, Latin) and 33 languages with more than 1 billion tokens. Because of the way the dataset has been selected and curated, it is possible to train LLMs on fully open data, which leads to auditable models. Moreover, as the original press release explains:
By providing clear provenance and using permissibly licensed data, Common Corpus exceeds the requirements of even the strictest regulations on AI training data, such as the EU AI Act. Pleias has also taken extensive steps to ensure GDPR compliance, by developing custom procedures to enable personally identifiable information (PII) removal for multilingual data. This makes Common Corpus an ideal foundation for secure, enterprise-grade models. Models trained on Common Corpus will be resilient to an increasingly regulated industry.
Another advantage for many users is that material with high “toxicity scores” has already been removed, thus ensuring that any LLMs trained on the Common Corpus will have fewer problems in this regard.
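For those who want to inspect the data themselves, the corpus can be streamed from Hugging Face. A minimal sketch follows, with the caveat that the dataset id (“PleIAs/common_corpus”) and the field names are taken from the public dataset card and may change:

```python
# A minimal sketch of streaming the Common Corpus from Hugging Face
# (pip install datasets). The dataset id and the "text" field are
# assumptions based on the public dataset card; check the card for the
# current schema before relying on them.
from datasets import load_dataset

# streaming=True avoids downloading 2+ trillion tokens up front
corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

for i, record in enumerate(corpus):
    print(record.get("text", "")[:200])  # peek at the first few documents
    if i >= 4:
        break
```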
The Common Corpus is a great demonstration of the power of openness and permissive copyright licensing, and how they bring benefits that other approaches can’t match. For example: “Common Corpus makes it possible to train models compatible with the Open Source Initiative’s definition of open-source AI, which includes openness of use, meaning use is permitted for ‘any purpose and without having to ask for permission’.” That fact, along with the multilingual nature of the Common Corpus, would make the latest version a great fit for any EU move to create “public AI” systems, something advocated on this blog a few months back. The French government is already backing the project, as are other organizations supporting openness:
The Corpus was built up with the support and concerted efforts of the AI Alliance, the French Ministry of Culture as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC).
This dataset was also made in partnership with Wikimedia Enterprise and Wikidata/Wikimedia Germany. We’re also thankful to our partner Libraries Without Borders for continuous assistance on extending low resource language support.
The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), Tracto AI, Mozilla.
The unique advantages of the Common Corpus mean that more governments should be supporting it as an alternative to proprietary systems, which generally remain black boxes in terms of where their training data comes from. Publishers, too, would be wise to fund it, since it offers a powerful resource explicitly designed to avoid some of the thorniest copyright issues plaguing the generative AI field today.
All the people who have always brushed off concerns about surveillance tech, please come get your kids. And then let someone else raise them.
Lots of people are fine with mass surveillance because they believe the horseshit spewed by the immediate beneficiaries of this tech: law enforcement agencies that claim every encroachment on your rights might (MIGHT!) lead to the arrest of a dangerous criminal.
Running neck and neck with government surveillance state enthusiasts are extremely wealthy Americans. When they’re not adding new levels of surveillance to the businesses they own, they’re scattering cameras all around their gated communities and giving cops unfettered access to any images these cameras record.
Here’s how it plays out at the ground level: parents can’t get their kids enrolled in the nearest school because of surveillance tech. In one recent case, license plate reader data was used to deny enrollment because the data collected claimed the parent didn’t actually reside in the school district.
Just over a year ago, Thalía Sánchez became the proud owner of a home in Alsip. She decided to leave the bustle of the city for a quiet neighborhood setting and the best possible education for her daughter.
However, to this day, despite providing all required paperwork including her driver’s license, utility bills, vehicle registration, and mortgage statement, the Alsip Hazelgreen Oak Lawn School District 126 has repeatedly denied her daughter’s enrollment.
Why would the district do this? Well, it’s apparently because it has decided to trust the determinations made by its surveillance tech partner, rather than documents actually seen in person by the people making these determinations.
According to the school district, her daughter’s new student enrollment form was denied due to “license plate recognition software showing only Chicago addresses overnight” in July and August. In an email sent to Sánchez in August, the school district told her, “Although you are the owner on record of a house in our district boundaries, your license plate recognition shows that is not the place where you reside.”
But that’s obviously not true. Sánchez says the only reason plate reader data would have shown her car as “staying” in Chicago was because she lent it to a relative during that time period. The school insists this data is enough to overturn the documents she’s provided because… well, it doesn’t really say. It just claims it “relies” on this information gathering to determine residency for students.
All of this can be traced back to Thomson Reuters, which apparently has branched out into the AI-assisted, ALPR-enabled business of denying enrollment to students based on assumptions made by its software.
Here’s what little there is of additional information, as obtained by The Register while reporting on this case:
Thomson Reuters Clear, which more broadly is an AI-assisted records investigation tool, has a page dedicated to its application for school districts. It sells Clear as a tool for residency verification, claiming that it can “automate” such tasks with “enhanced reliability,” and can take care of them “in minutes, not months.”
One of the particular things the Clear page notes is its ability to access license plate data “and develop pattern of life information” that helps identify whether those who are claiming they’re residents for the sake of getting a kid enrolled in school are lying or not.
Thomson Reuters does not specify where it gets its license plate reader data and did not respond to questions.
We’ll get back to that last line about the source of the license plate data in a moment, but let’s just take a beat and consider how creepy and weird this Thomson Reuters promotional pitch is:
The text reads:
Gain deeper insights into mismatched data to support meaningful conversations with families and ensure students are where they need to be. Identify where cars have been seen to establish pattern of life information.
No one expects a law enforcement agency to do this (at least without a warrant or reasonable suspicion), much less a school district. Government agencies shouldn’t have unfettered access to “pattern of life” information just because. It’s not like the people being surveilled here are engaged in criminal activity. They’re just trying to make sure their kids receive an education. And while there will always be people who game the system to get their kids into better schools, that’s hardly justification for subjecting every enrolling student’s family to expansive surveillance-enabled background checks.
And while Thomson Reuters (and the district itself) has refused to comment on the source of its plate reader data, it can safely be assumed that it’s Flock Safety. Flock Safety has never shown any concern about who accesses the data it compiles, much less why they choose to do it. Flock is swiftly becoming the leading provider of ALPR cameras, and given its complete lack of internal or external oversight, it’s more than likely the case that it’s feeding this data to third parties like Thomson Reuters that are willing to pay a premium for data that simply can’t be had elsewhere.
We’re not catching criminals with this tech. Sure, it may happen now and then. But the real value is repeated resale of “pattern of life” data to whoever is willing to purchase it. That’s a massive problem that’s only going to get worse… full stop.