EFF's Patent Busting Project Scores Another Hit

from the more-good-news dept

While it’s taken quite some time, the EFF has had considerable success with its project to bust ten awful patents. The latest is that the USPTO has agreed to re-examine a patent from Seer Systems involving online music. Again, the really tragic thing about all of this is that the EFF started this patent busting project almost five years ago, and the process is still in its early stages. And that’s for just ten of the most ridiculous patents you’ll find today. Think of what a mess it would be to challenge the many other bad patents out there.

Companies: eff, seer systems


Comments on “EFF's Patent Busting Project Scores Another Hit”

23 Comments
Lonnie E. Holder says:

Reexaminations & Easy

A reexamination essentially requires only one thing: a coherent explanation as to why a patent should not have been issued in the first place, nearly always supported by credible, dated documentation. A request for reexamination can be made by anyone, with submission of a fee (recalling that the USPTO actually makes a profit) and an explanation that raises a “substantial new question of patentability.”

If the evidence of the existence of prior art is clear, getting the reexamination is relatively easy. Once the USPTO agrees that the submitted documentation does in fact raise a “substantial new question of patentability,” it takes the USPTO time to complete the reexamination process.

I find little reason to be enthused when the USPTO “agrees to re-examine a patent.” Big deal. Pay the fee, provide the evidence and submit the explanation. If you have a good, coherent explanation as to the “substantial new question of patentability,” reexamination is virtually assured.

Mike (profile) says:

Re: Reexaminations & Easy

I find little reason to be enthused when the USPTO “agrees to re-examine a patent.” Big deal. Pay the fee, provide the evidence and submit the explanation. If you have a good, coherent explanation as to the “substantial new question of patentability,” reexamination is virtually assured.

Well, the fact that approx 3/4 of patents get claims tossed out on review makes it a decently big deal…

Not to mention, it shows what an awful job the USPTO does on its first round of reviews. You know, the ones after which the patent is presumed valid…

Lonnie E. Holder says:

Re: Re: Reexaminations & Easy

Mike:

Your post made it sound as though getting the USPTO to re-examine a patent was hard. My comment was in response to that portion of your post.

Your statistics are accurate. About 26% of all patents that are re-examined (about 500 per year) survive with all claims intact. About 10% are thrown out. The others survive with some claims eliminated.

However, I take issue with the “awful job the USPTO does on its first round of reviews.” About 157,000 patents issued in 2007. About 500 reexaminations are filed per year. Considering the numbers provided above, about 370 of those patents had some or all claims removed. That represents about a 0.24% error rate. Now, you could reasonably argue that many more patents would be challenged if people had the money, time and enthusiasm, and I suspect you would probably be correct, but assuming “awful job” strictly from the number of reexaminations is a stretch.
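
A quick back-of-the-envelope check of those figures, written as a minimal C sketch (the inputs are simply the numbers quoted in the paragraph above):

    #include <stdio.h>

    int main(void)
    {
        double reexams_per_year = 500.0;     /* reexamination requests filed per year */
        double share_modified   = 0.74;      /* ~74% lose some or all claims */
        double patents_issued   = 157000.0;  /* patents issued in 2007 */

        double modified = reexams_per_year * share_modified;  /* ~370 patents */
        double rate     = modified / patents_issued * 100.0;  /* ~0.24% */

        printf("~%.0f patents modified, ~%.2f%% of a year's issuances\n",
               modified, rate);
        return 0;
    }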

Lonnie E. Holder says:

Re: Oh I get it

I only wish it was that easy…

1. Charge fee to submit patent.
2. Reject patent, even when completely novel.
3. Submit office action response, explaining novelty.
4. Reject patent finally.
5. Submit appeal along with fee.
6. Obtain allowance.
7. Pay fee to allow patent to issue.

I have yet to be involved in a patent that was “rubber stamped.” I guess I have been getting the examiners who do a better job of examining applications.

Andrew D. Todd (user link) says:

Different Fields, Different Kinds of Patents

It all depends what field you are in. I have looked up Lonnie E. Holder’s patents,

http://www.patentgenius.com/inventor/HolderLonnieE.html

and he seems to be a designer of tractor transmissions, with a special interest in fitting tractors with the kind of “idiotproof” interlocks which automobiles and trucks have had for years. The tractor industry is necessarily something of a backwater. The kinds of questions which are considered interesting and legitimate in the highest reaches of an academic physics department (what is matter, what is energy, what is time?) are, one would think, very unlikely to arise in the course of tractor design.

If Mr. Holder were to sue General Motors, Ford, Chrysler, Toyota, Honda, Nissan, etc, claiming that his patents covered the Park setting in automobile automatic transmissions, I think there might very well be problems. However, as long as he leaves them alone, they will probably leave him alone.

Computer Science is different. My own run-in with the patent system had to do with what was in effect a kind of bachelor’s thesis in Philosophy.

http://www.techdirt.com/article.php?sid=20080327/195124677#c412

Computer science is inherently abstract, because it is a science of information. The question, “what is man,” becomes a guideline on how to build an automaton. The most theoretical work, done by people who are profoundly uninterested in being businessmen, tends to have profound economic implications at a long remove. If “In Re Bilski ” is interpreted correctly, it may have the effect of removing Computer Science from the scope of the patent system, and that may be best for both Computer Science and the patent system. Mr. Holder is rather in the position of a shark who has inadvertently swallowed a poisonous spiny pufferfish. It’s not good for either party.

Lonnie E. Holder says:

Re: Different Fields, Different Kinds of Patents

…and he seems to be a designer of tractor transmissions, with a special interest in fitting tractors with the kind of “idiotproof” interlocks which automobiles and trucks have had for years.

If tractor transmissions used the same mechanisms that automobiles and trucks have used for years, we could have copied their designs and saved a lot of time and trouble. Instead, we had to figure out how to do it with completely different mechanisms and technology.

The tractor industry is necessarily something of a backwater.

lol…Not if you are in it!

The kinds of questions which are considered interesting and legitimate in the highest reaches of an academic physics department (what is matter, what is energy, what is time?) are, one would think, very unlikely to arise in the course of tractor design.

Okay. And your point is…?

If Mr. Holder were to sue General Motors, Ford, Chrysler, Toyota, Honda, Nissan, etc, claiming that his patents covered the Park setting in automobile automatic transmissions, I think there might very well be problems. However, as long as he leaves them alone, they will probably leave him alone.

Well, we have never claimed a “park” position for our transmission. I assume someone, somewhere, made such a claim, probably decades ago. We have cited numerous automotive patents as our information disclosure statements. I will also point out that the transmission technology is quite different between the automotive companies and our little company, and it is extremely unlikely that either of us are using similar technology to address our problems.

Lonnie E. Holder says:

Re: Different Fields, Different Kinds of Patents

Computer science is inherently abstract, because it is a science of information. The question, “what is man,” becomes a guideline on how to build an automaton. The most theoretical work, done by people who are profoundly uninterested in being businessmen, tends to have profound economic implications at a long remove.

Okay, so your work is highly philosophical, and not inventive?

If “In Re Bilski ” is interpreted correctly, it may have the effect of removing Computer Science from the scope of the patent system, and that may be best for both Computer Science and the patent system.

I doubt that Bilski will remove computer science from the scope of the patent system. I hope that it will limit claims and the kinds of patents that can be issued on software, but only time will tell.

Mr. Holder is rather in the position of a shark who has inadvertently swallowed a poisonous spiny pufferfish. It’s not good for either party.

How do you come to that conclusion?

Lonnie E. Holder says:

Re: Re: Re: Different Fields, Different Kinds of Patents

AC:

But it is so fun. The more facts you present, the more people here get riled up. At first you hear, “back that up with facts.” Then you back up your statement with facts and links. Then you hear “Well, you can’t believe that site; it’s biased.” Like Techdirt is the paragon of objectivity! Or you hear, “Well, in this limited case it might have been the way you say, but that still does not address the original problem.” Oh, yeah. So, just because a certain percentage of the time the system fails to do what it should, we should throw the system out? Might as well eliminate social security, medicare, welfare, airplanes, automobiles, amusement parks, the military, highway departments, and just about every other human endeavor.

gene_cavanaugh (user link) says:

Patent Busting

Even though I am a patent attorney, I TOTALLY agree with Michael on this one!
There was a time that patents performed the function given in the Constitution, and could do so again, IMO. However, large entity patenting has corrupted the system so much that MAJOR changes are needed (though I do small entity patenting, which is still pure – unfortunately, small entities don’t have much money, so I am the only small entity attorney I know of!).

Lonnie E. Holder says:

Re: Patent Busting

Gene:

According to the SBA, small companies have more patents per employee than large companies. There are lots of small entity agents and attorneys, but I suspect you will find more in the Midwest (not very many software patents here though) than in California.

Patents do still perform the function given in the Constitution, but they also perform other functions. Yet, the number of patents involved in those other functions is relatively small.

One of the things I find fascinating is that there is a difference in paradigms between industries. For the most part, patents in the non-software industries and most of electronics do what patents have always done (or seem to). It seems that business method patents and software patents are a whole other issue.

Yet, there are always opportunities for improvement. Considering my experience, I think most manufacturing companies would prefer to see an improved patent system that issues fewer, stronger patents. There are numerous reasons for this, but I think most manufacturing companies (which, again in my experience, rarely sue others for patent infringement, but are often sued by small entities) would welcome reduced uncertainty and greater clarity in patents.

Lonnie E. Holder says:

Re: Re: Re: Patent Busting

According to the SBA, small companies have more patents per employee than large companies

Since when is “patents per employee” a meaningful indicator of anything?

So, you think that this data is an indicator of nothing? Then having a terabyte storage device is an indicator of nothing. A yellow light is an indicator of nothing. Data is ALWAYS an indicator of something. In this case, I believe it is an indicator of the high level of inventiveness and innovation in small companies.

Patents do still perform the function given in the Constitution

If only there was a shred of evidence to support that.

Which of the millions of shreds of evidence would you like?

Andrew D. Todd (user link) says:

Response to Lonnie E. Holder, # 10, 11

Look at it this way: if you really understand how biological life works, you can use that understanding to do genetic engineering. Here’s an aphorism I learned from studying Philosophy of Science. Explanation equals Prediction equals Technology. That is, Explanation of what has happened, equals Prediction of what will happen, equals Technology of how to make it happen. If you cannot Predict and do Technology, then the Explanation was false in the first place.

At the time of the Second World War, all kinds of very highbrow types, such as pure mathematicians, physicists, philosophers, psychologists, artists, (classical) musicians, and even electrical engineers, got into the war effort, to stop Hitler and all that. It wasn’t just the hard scientists, of course. Anthropologists were apt to get parachuted behind enemy lines to organize the natives as guerrillas. At any rate, the scientists and suchlike were often put through crash courses in engineering and set to work, mostly on the more physics-oriented types of weapons. There was the Manhattan Project, the atomic bomb, of course, but a lot of them wound up working on things like radar and sonar, or code-breaking. At the same time, this exposure to engineering got them thinking in their spare time about whether they could take an engineer’s approach to the ideas they had been thinking about back in their universities. This was the time and place when the electronic computer emerged.

In particular, there was the philosopher Noam Chomsky. He began thinking about how one could explain human language and thought in “tinkertoy terms.” There had been a body of linguists before the war (the Prague School) who had been trying to formulate general laws of language, but Chomsky took it a step further, inventing Generative Grammar, the beginning of Artificial Intelligence. In the environment of MIT’s wartime laboratories, people shared ideas back and forth, and the idea of Generative Grammar cross-pollinated into the mind of a mathematician named John Backus, who developed the idea into the FORTRAN compiler. Chomsky’s students, and disciples, and anti-disciples, unto the third and fourth generation, pursued these merged ideas. Eventually, the wartime cohesion began to wear thin, especially under the strain of the Vietnam War. Chomsky eventually got into a titanic quarrel with Marvin Minsky, who was MIT’s major recipient of military DARPA funds for computer research.

Probably the single biggest factor separating academic research from business practicality was cost. People were playing around with systems which, if fully implemented, would have required computers costing a trillion dollars, or some similarly modest sum. As costs came down over twenty or thirty years, these systems became much more practical than their original authors had ever imagined. Thus things like Google became possible.

Look at the “qsort” function in the C programming language– that is a fairly good example of what Computer Science is all about. You pass “qsort” some parameters, viz, the starting location of an array, the size of the records in the array, the number of records, and the address of a function– let’s call it “foobar”– which can tell “qsort” which of two records should go first. “qsort” calls “foobar” over and over again until the array is completely sorted. Sorting is treated as a kind of summation of the act of comparison. “qsort” does not know anything about the internal meaning of the records. If you are writing an implementation of “qsort,” you don’t need to know how many records there will be, or how long each record will be. You arrange to automatically copy that information from the input parameters. What you do know about the world outside the “qsort” routine is the precision of the standard integer– whether it is sixteen-bit, or thirty-two-bit, or sixty-four-bit. You also need to know the conventional method of representing machine addresses, which is not always the same as the standard integer. That is about all you need to know.
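
For readers who have not run into it, here is a minimal, self-contained C sketch of the interface described above. The array contents and the comparison function name (compare_ints) are illustrative choices; the qsort call itself is the standard C library function.

    #include <stdio.h>
    #include <stdlib.h>

    /* The callback qsort uses to decide which of two records goes first.
       qsort never looks inside the records; it only calls this function. */
    static int compare_ints(const void *a, const void *b)
    {
        int x = *(const int *)a;
        int y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int records[] = { 42, 7, 19, 3, 88 };
        size_t count = sizeof records / sizeof records[0];

        /* qsort gets the array's start, the number of records, the size of
           each record, and the address of the comparison function. */
        qsort(records, count, sizeof records[0], compare_ints);

        for (size_t i = 0; i < count; i++)
            printf("%d ", records[i]);
        printf("\n");
        return 0;
    }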

What I did was to attempt to address a philosopher’s problem by designing a type of “universal inference engine,” in the tradition of Chomsky, at about the same level of abstraction as “qsort.” At the time, the cost of actually building the thing would have been prohibitive, but as costs came down, with Moore’s Law, it became inconvenient prior art for anyone who might want to patent the idea of a universal inference engine. I don’t think I was particularly unique– lots of other young men were doing the same kind of thing at the same time, at various colleges and universities across the country. However, a more or less chance event resulted in my getting a paper publication out of it, and eventually meeting the legal definition of prior art. Bachelors’ theses and similar work were not normally published at the time, this being long before the internet.

If you were designing transmissions at that level, you would be creating a kind of archetype-transmission which would be equally suitable for an automobile, or a motor scooter, or a tractor, or a railroad locomotive. Possibly, you could create a CAD/CAM program which would design transmissions to order from their performance specifications. There are programs which design electronic circuit boards on that principle. If the automakers were to start using electric drive, the way it is used in certain kinds of large equipment (every wheel has its own motor, getting electricity from a common power bus fed by an onboard electric generating plant), you might find yourself using semi-surplus automobile parts for the usual excellent reasons (*), and the distinctive characteristics of tractor transmission design might be reduced to control systems.

(*) A typical production run for an automobile working part is likely to be five or ten million units, and the price accordingly low, and you need compelling reasons not to use such parts which, from the standpoint of specialist equipment, are effectively free and disposable. In electronics, over the years, a lot of specialist companies reached the point of saying that, given the development of small computers, there was no longer any good reason for them to be in the hardware business. They could not make as good a microprocessor for the money as Intel could, so they switched to software instead. It would be interesting to see if that applies to other kinds of industries as well. You probably know about “genset” railroad locomotives using multiple truck engines. There are about 20,000 railroad locomotives, versus about two million eighteen-wheeler trucks. That could be a problem for the specialist firms which make 3000-4000 hp diesel engines.

Lonnie E. Holder says:

Re: Response to Lonnie E. Holder, # 10, 11

Andrew:

Your post was perhaps the most philosophical post I have ever read on this web site. I re-read it several times, and am unsure of whether I grasped all the points you made.

Archetypical Transmission: We did a lot of studying of automotive transmissions to see whether there were lessons we could learn. However, the biggest problem we have is that our transmissions never shift gears. No clutches. Indeed, the price of our cheapest transmissions is less than 10% of the price of an automotive transmission.

We have also studied other transmissions, mechanical, hydraulic, and electrical. Again, the underlying requirements and technology were so different from ours that the technology was not transferable. So, it might be possible to create an archetypical transmission with gear shift ranges, or it might not. Thus far, after more than a century and a half of transmission design, no one else has been able to do so.

I was intrigued by your explanation -> prediction -> technology philosophy. This connection sounds much like what is attempted by TRIZ.

You might find one of our experiences interesting. We needed a new, low-cost return-to-neutral mechanism for our transmissions. Others had designed and used an array of such devices, but all of them had drawbacks. Finally, we changed paradigms and developed an RTN that was cheaper than previous RTNs, easier to adjust, and used an approach that no one else had ever considered. In hindsight, once we developed it, we wondered why no one had ever come up with the design before (we searched hundreds of patents and never found a similar approach, and we looked at quite a few production designs from customers and competitors and similarly found that no one else had used the approach we did).

Now, from the viewpoint of that design, you can provide the explanation and the prediction, and the developed technology provides the solutions previously used, which work, but which have problems of various kinds. What was missing was a technology that was easier to adjust (implying lower cost), a technology with fewer parts, and a technology that required less maintenance over life.

Hang around and provide more comments.

Andrew D. Todd (user link) says:

Response to Lonnie E. Holder, # 10, 11, 19

I found the TRIZ business interesting. Of course, it’s a case of common ancestry. All of this stuff goes back to eighteenth and nineteenth century German philosophy, whose most important representatives were Immanuel Kant (1724–1804) and Georg Wilhelm Friedrich Hegel (1770–1831). In 1945, a mathematician named George Polya published a book entitled _How to Solve It_, a kind of cookbook of mathematical proof. A man named Allen Newell read this book, and in 1957, at the RAND Corporation, he implemented it in a program called the General Problem Solver, which could prove mathematical theorems by the Polya method. Eventually, such techniques turned up in practical programs such as Mathematica. More recently, people have gone further. They have developed Artificial Intelligence programs which are tied to laboratory robots, so that they can formulate hypotheses, design experiments, and command the robots in carrying out the experiments. The data is collected automatically, and fed back into the program to formulate revised hypotheses. Such programs are still at a comparatively early stage of development.

Software development works around archetypes. Bits of code are moved into functions or subroutines, gathered together into libraries or software objects, extended and generalized, and eventually built into the operating system, or else into a programming language. For software, the manufacturing costs are trivial, obviously. Beyond that, the cost of the memory which holds the program is trivial. About the only real hardware cost/performance issue is the rate at which the length of computation goes to infinity as the size of the data goes to infinity (Polynomial Time, Big O Notation, Theory of Computation, etc). This of course applies only in limited circumstances. The dominant cost is generally the cost of the programmer. A modern operating system, with all the trimmings, is huge, on the order of hundreds of millions of lines of code. Its dominant issue is that of complexity. People seek to control the complexity by always trying to formulate things in the most abstract fashion possible, and having the computer automatically chain from one formal definition to another.

Leading software companies commonly compete with discounts of 95% or more when they feel the need, such are the overwhelming economies of scale, doing things which would clearly constitute predatory pricing in conventional industry. In the last analysis, there is only room for one proprietary software company, and that is Microsoft. Bilski effectively declares software per se (personal computer software as distinct from certain embedded systems) to be the equivalent of paper-and-pencil calculations and/or books of instructions for performing such calculations. These are things which have traditionally been exempt from patentability. Software is not tractors, and tractors are not software. The Open-Source movement really likes Bilski, and if you believe in a free market in software, you have to believe in Open-Source, because there is no alternative. Microsoft’s proprietary rivals have collapsed, one after the other, unable to compete with 95% discounts. Microsoft itself is coming around to the view that Bilski is the lesser evil, given that the law is the same for them as for anyone else. About the only people involved with software patents who really hate Bilski are lawyers and financiers who are in the business of buying up patents and using them to sue with, and who do not themselves produce any software. “Non-Practicing Entities,” or more vulgarly, Patent Trolls.

Bilski does not reach anyone who actually designs and sells steel objects weighing a hundred pounds or whatever. The practical alternative to Bilski is to develop KSR v. Teleflex further, petitioning the Supreme Court to read the whole TRIZ textbook into American case law to form a “calculus of obviousness.” That could well prove a lot more onerous to conventional industry than Bilski could.

Physical objects have dimensions. One cannot speak of a software entity having dimensions in anything like the same way. Granted, one could have a program which caused an NC machine tool to cut a gear of such-and-such a diameter, with such-and-such a number of teeth, the numbers being keyed in from a console. But once cut, the gear would be reverting from the archetype towards the particular. Of course, you might have an electric motor/generator, with each winding having its own power transistors, and a controller to drive the different transistors and windings. In that case, the motor’s behavior is comparatively data-driven. Using it as a generator, you can set drag by regulating the amount of current you put through the exciter winding. Using it as an induction motor, the key parameter is the “slip,” the difference in frequency between the current fed to the stator coils and the physical movement of the rotor. You can manage that by using a solid-state commutator to flip the polarity of the input current back and forth. I suppose this is the kind of thing which the Toyota Prius has in the controls of its hybrid transmission. This transmission consists of two motor/generators, one clutch to disengage the engine, and one planetary gear set with fixed connections, used as a differential (between the clutch and engine; one of the motors; and the driveshaft, on which the other motor is mounted). By adjusting the exciter current on one motor, the controls can regulate how much of the engine’s power is transmitted through the mechanical coupling and how much goes through electric drive.
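
As a rough numeric illustration of that “slip” parameter, here is a minimal C sketch using the standard induction-motor relations; the line frequency, pole count, and rotor speed below are made-up numbers, not figures from any particular machine.

    #include <stdio.h>

    int main(void)
    {
        double line_freq_hz = 60.0;   /* frequency of the current fed to the stator */
        int    poles        = 4;
        double rotor_rpm    = 1710.0; /* hypothetical measured rotor speed */

        /* Synchronous speed of the rotating stator field, in rpm. */
        double sync_rpm = 120.0 * line_freq_hz / (double)poles;  /* 1800 rpm */

        /* Slip: how far the rotor lags the field, as a fraction of sync speed. */
        double slip = (sync_rpm - rotor_rpm) / sync_rpm;         /* 0.05 = 5% */

        printf("synchronous: %.0f rpm, slip: %.1f%%\n", sync_rpm, slip * 100.0);
        return 0;
    }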

If the automakers were investing large sums, billions of dollars, in finding better and cheaper ways to make 100 hp electric motors, you might find that since the motors’ behavior was not “hardware-preprogrammed” to anything like the same extent as a conventional gearbox, it might be advantageous to use such motors even if they were not optimally designed for your situation.

This is essentially the situation which minor electronics manufacturers have found themselves in. Things like microcontrollers were simply so cheap, with the immediate prospect of becoming cheaper yet, that the counsel of prudence was to do as little hardware work as possible, and convert as much of the design as possible to software. For example, a modem might consist of a microcontroller, and one or more amplifiers to boost the microcontroller’s output up to line voltage.

Lonnie E. Holder says:

Re: Response to Lonnie E. Holder, # 10, 11, 19

Andrew:

You make my brain hurt. Other than that, neat post.

I see your point about archetypical software, and I understand.

I also think that while TRIZ provides methodology, it does not provide devices. I could go into a lengthy explanation, but I suspect you would figure out the conclusion before I could write it.

I think your comment re electric motors was interesting. I heard a talk given by an Emerson engineering executive about a decade and a half ago. He explained why electric motors were commodities and the lengths that the two major companies, Emerson and General Electric, went to in order to save pennies per motor, given the volumes involved. I keep waiting for a new breakthrough in motor technology, but it seems long in coming.

Motors are designed using the laws of electromagnetics, and conventional wisdom says that further significant improvements in energy density or cost are impossible. On the other hand, perhaps the clue is to figure out how to avoid the limitations of those equations… Edison did it for the light bulb.

Andrew D. Todd (user link) says:

Response to Lonnie E. Holder, # 10, 11, 19, 21

Well, I understand that sizable electric motors are already “substantially efficient,” ie 90%+ at greater than 25 HP. In short, this puts them in the same general range as gear pairs. Of course, most of the competitive motor market is for much smaller motors, the kind that drive refrigerators, air conditioners, etc. A lot of the discussion of large electric motor efficiency tends not to involve motors per se, but motors in particular applications, and it tends to boil down to not doing the equivalent of running a motor against a brake, that is, fitting them with adjustable speed controllers instead.

The issues of size, weight, and cost are something else, of course.

The equations pertaining to an electric motor ultimately depend on the voltage you choose. If you look at two “best-of-class” projects, the Prius and the GM EV1, you find that they both run at 200-300 volts, the EV1 being nominally rated at 320/220 volts, and the Prius at 273 volts. At one point, it was proposed that the EV1 should be at least partially stepped up to 440 volts. Of course, increasingly stringent safety mechanisms are required as the voltage goes up, but those do not seem to present fundamental difficulties. Light rail (trolleys, subways, etc) operates in part at 660 volts, with reasonable safety, and of course, regular electric railroads run, in part, at much higher voltages. The power is stepped down to 220 volts to be fed through a flexible connection to the wheel motors, of course. When you ride a heavy commuter rail car on the Northeast corridor, the kind that has a pantograph (e.g., Philadelphia’s SEPTA system), you may very well be standing within five feet or so of 25,000 volts. The “copper economy” of the motors of the two automobiles cited above was sufficient that it didn’t seem to be an identifiable limiting factor. The limiting factor of the EV1 was of course its batteries, the same as for every other pure electric car.

Now, as for cost, that is, generally speaking, going to be a matter of labor inputs. Simply producing things in very large quantities means that all kinds of specialized automation becomes feasible to eliminate hand labor, and the R&D, tooling, and overhead expenses can be spread out over a very large number of units. In effect, low cost is something of a self-fulfilling prophecy.

See:
Michael Shnayerson, The Car That Could: The Inside Story of GM’s Revolutionary Electric Car, 1996

Lonnie E. Holder says:

Re: Response to Lonnie E. Holder, # 10, 11, 19, 21

Andrew:

Again, good post.

Yes, higher voltage is always better in an electric drive system from an efficiency and life viewpoint, but not from a safety viewpoint. As with all things, there are tradeoffs.

Electrically driven turf care products seem to be aimed toward 24v or 48v systems for now because of the safety-related issues. Cost also remains a huge factor. In spite of multiple companies’ approaches to electric, none has had sustained success to this point (excepting robotic mowers).

Will anyone find the key to the “ride-on electric turf machine”? Probably, but I still think that immediate cost and battery recharge time will continue to be factors into the foreseeable future.
