3 Comments
Ronald D Stauffer:

Ron,

I'm not well versed in this proposed AI legislation. And maybe what it's represented to be is not what it is at all? Kind of like a "Protecting Children" law that does anything but protect children, yet DOES restrict our liberties...

But assuming that some of the proposed legislation is aimed at letting users know whether they are looking at an AI-generated product or one made by human hands...

Allow me please to channel my inner John Stossel...

What's wrong with a law that tells the consumer what the ingredients are in the food that he is eating, or in the information that he is consuming? Personally, when I consume a YouTube video, or other digital product or image, I'd like to know if it actually exists in the real world. I want to know if the butter I'm eating contains things that are not real milk and cream. I'd also want to know if the video clip of the Vice President's speech is real, or made up by a hacker with a political axe to grind.

I'm not afraid of AI products, just as long as I know they're AI. Help me understand how this is harmful.

Ron Stauffer:

That’s a perfectly fair question. The idea of labeling content as “AI-generated” sounds reasonable, and I don’t have a problem with that at all in principle. Many platforms already let users self-label AI-generated content as an option (or a "requirement," though that's nearly impossible to enforce).

But the devil’s in the details... and labeling in principle isn’t really the issue at hand. The issue is the way California’s laws were written. SB 942 and AB 853 go far beyond labeling and actually make platforms legally responsible for policing and verifying everything users upload, with penalties if they don’t. (Dean Ball’s article in Pirate Wires is worth a read; he digs into those specifics.)

What I’m trying to do here is explain what happens once laws like that start multiplying nationwide.

The “discrimination in employment” angle is especially concerning and potentially very harmful. If an employer gets sued because a plaintiff claims the company “used AI” in hiring or firing decisions, how exactly can that employer ever prove they didn’t? It flips the burden of proof to the wrong party, *assuming guilt* unless they can somehow prove innocence. That’s dangerous and backward, and it could have a chilling effect on hiring, firing, and countless other business decisions.

And beyond that, how do you even define “AI”? Grammarly uses it. So do other spell-check tools, Google Docs, predictive text, and even Substack’s own formatting tools. Does every piece of writing that uses those features now count as “AI-generated”? Once you start chasing that definition legally, the whole thing collapses under its own weight.

Should we be required to stamp "this uses AI" on every video, every photograph, every article, every PDF, and every social media post now, upon pain of fines and legal harassment?

If so, what's the point? We could just add one single line to all Terms of Service agreements: "We use AI here," rendering the whole thing meaningless.
