The worst thing about 'AI'

Channel: aragusea
Video ID: DwAPwbcq5GQ
Published: June 04, 2025
Duration: 22:17
Views: 209,561
Likes: 11,216
Transcript: auto-generated, English, 3,186 words

Hi, I'm minor internet food celebrity Adam Ragusea. Is artificial intelligence about to make me redundant, as the Brits would say? Well, get ready to check out a chatbot cooking video script ostensibly written in the style of yours truly. It's going to be creamy and dreamy. Speaking of dreams, I've slept so badly for the last few days at this rental house that I'm in. Can't wait to get home in a few days to my mattress from Helix, sponsor of this video. At this point, I feel like my Helix and I are bonded. I'm not sure where my body ends and my Helix begins, because my Helix is made for my body. Well, me and Lauren's bodies. You go to helixsleep.com. You tell them how you and any bed buddies are sized and shaped, how you like to sleep, what kind of sleep problems you have, back pain, getting too hot, etc., and they recommend a premium mattress just for you. We got this hybrid spring-and-foam number and Helix shipped it free right to my door. Free in the US. There's no big trucks or movers involved. That is a real mattress in a box under vacuum seal. You can move it yourself. I am so spoiled by my Helix. When I travel, every mattress just makes my back hurt and I'm hot all night. Get me back to Helix country. Right now it's the summer sale: 20% off sitewide. There's a 100-night sleep trial to make sure you love it. 20% off sitewide with my link in the description. Thank you, Helix. So, I am in California this week and, uh, I needed to come up with an economical video to do, and my mind turned to an email that I got from a self-described fan in September of 2023. The subject line: "AI script for Adam Ragusea." This fan said that they were playing around with a large language model chatbot that was brand new at the time. And they entered in the following prompt: "Write a script for a video of Adam Ragusea. Make sure it follows his style of video making." And then this fan sent me the output that this chatbot produced. Let me read you some highlights.
"Hey folks, Adam Ragusea here, and today we're diving into a classic comfort food recipe that's sure to make your taste buds dance with joy. That's right, we're making homemade spaghetti carbonara. It's a simple Italian dish that packs a ton of flavor. So, let's get right to it." Totally sounds like me, right? Let us continue. "First, we'll get our water boiling. Always remember, we need this water to be as salty as the Mediterranean Sea." That, of course, is not actually true. The Med, like the rest of the global ocean, is about three times as salty as the water we generally boil pasta in. If you actually boiled your spaghetti in 4% salinity water, it would be gross. "Get the water as salty as the sea" is a thing that people say. So, I understand how an LLM vacuumed up that sequence of words and spit it back out again. But it's not a thing that I typically say, because while I think it can be close enough on the intuitive level, it is technically really wrong. Let's keep reading. "In go the spaghetti noodles." Yes, of course, I always call them spaghetti noodles. "Camera shows closeups of the creamy egg and cheese mixture. Adam smiling. 'This is the magic that'll make our carbonara creamy and dreamy.'" Totally sounds like me, right? "Thanks for joining me in the kitchen today. If you enjoyed this recipe, don't forget to like and subscribe, and be sure to leave your comments below." As you know, I always directly entreat you to like and subscribe, because I believe you have no idea how to use YouTube and you require basic instruction. That sounds like me, right? "Adam gives a thumbs up to the camera as it fades out. Closing credits roll with some light background music." That's how I end all of my videos, right? Okay. So, in the list of sins committed by large language models, these ridiculously uncharacteristic style choices rank rather low, I'll grant you. But I still took one look at that script a year and a half ago and I got this warm, comfortable feeling.
I looked at that script and I thought, "Oh, my job is so safe. This is trash. This AI stuff is just a new kind of spam. This is paper junk mail all over again. It's another low-cost, low-value, high-volume information product that people will simply have to brush out of their way like spiderwebs in search of the thing that they actually want. This is annoying, but it's no threat to my job." And you know, in the ensuing year and a half, I do think that I've been proven pretty much right. LLMs suck up everything we've ever written, blend it up into a kind of word emulsion, reconstitute it, and spew it back out at us in the form of a gray word paste that could be likened to the innards of a chicken McNugget. A word McNugget is what I call a chatbot output. Now, the tech bros are already down in the comments saying, "Hey, maybe generative AI can't make a good Adam Ragusea video, but it can do lots of other useful things." And you know what? I am sure that that's true. I'm sure that it can do lots of other useful things. I just think that point misses the point. It's like going up to an anti-nuclear activist and saying, "Hey, nuclear power has lots of legitimate uses." Well, yeah, dumbass. Nobody denies that. The argument that an anti-nuke person is making is that the apocalyptic risks of nuclear technology simply far outweigh any conceivable benefits. Now, I, Adam Ragusea, am not taking a position on that particular subject right now. I'm just making a rhetorical comparison. "LLMs can be useful for some things" is a straw man argument. Nobody who knows what they're talking about denies that LLMs can be useful. The question is whether you stealing our creative work and selling it back to us in a much shittier form is worth the, uh, convenience. I'm honestly not sure what my answer to that question is. I think you can tell which way I'm leaning, but I'm trying to reserve judgment as the situation unfolds. I have thus far in my life tried to be a good soldier against Luddism.
Lots of the arguments that people make against AI these days sound a lot like the arguments that people used to make against, like, Wikipedia and Google search, spell check even, or calculators. "How are kids going to learn how to do math if they're just using a calculator?" It's a thing people used to say. Up to this point in my life, I have very much been on team tech. But I could easily see myself now defecting over to the, uh, Butlerian Jihad any day now. And let me tell you why. Let me tell you what I think is the very worst thing about the new so-called AI technology that would be more accurately described as a glorified autocomplete. As you may know, before I became a minor YouTube food celebrity, I was a vaguely serious, vaguely well-known journalist. I was on the journalism faculty at Mercer University in Georgia. And in the journalism discipline, we have an ultimate sin. The worst thing you can do as a journalist, the infraction for which your colleagues will punish you most harshly, the thing that you can do that will make it all but impossible for you to ever work again as a reporter, is fabulism. Making things up. There are other sins in journalism, for sure, but flat-out fabricating facts or quotes is generally regarded as the worst thing that you can do. Why is a fabricated news story necessarily more harmful than, like, a really biased story, or a story filled with mistakes? Honest but still potentially libelous mistakes. Is fabulism always more harmful than that? No, not necessarily, I don't think. But I do think, and this is my personal interpretation, I do think that the reason that journalists punish fabulism above all else is that it is so hard to prevent. It is so hard to defend against. If I'm an editor with reporters working underneath me and one of those reporters is making stuff up, it is surprisingly hard for me to catch that, because in daily news, we generally don't, like, fact-check every single claim that a reporter makes in a story.
Back in the good old days when journalism still had money, high-quality periodicals would check every fact, because you have more time for that in, like, a monthly magazine. The good magazines had fact checkers on staff who would take the reporter's story and then independently re-report it, call every source, read their quote back to them, say, "Did you really say this?" That kind of thing. Newspapers and other daily news outlets rarely had time for that kind of thing, even in the good old days. So, editors do spot-checking. If a claim of fact in the story is particularly consequential or in any way suspicious, you might double-check that one. But mostly, you have to trust your reporters to not make stuff up. In the short term, at least, it's really hard to spot a well-executed fabrication. Good-faith errors are easier to spot because people tend to make mistakes in predictable patterns. People leave off a zero. They misspell unusually spelled names. They confuse the city government with the county government. They use a secondary source when a primary source might be available. No one knows how to use commas. People tend to fall into the same set of traps when they make honest mistakes. And as an editor, you can anticipate a lot of those mistakes and fix them before they go to print. Even slightly more mendacious problems in journalism can be anticipated and corrected. Like when a journalist massages the facts to make the story more favorable to their value system or to their point of view. That's human. That's predictable, and you can correct for it somewhat. But if a reporter says, "You know, I heard this protester at this rally shout this really incendiary thing," you just have to trust that they aren't totally making that up, because you can't pass through time and space to go verify a transient event like that. Outright lies are both incredibly harmful to the mission of journalism and very, very hard to prevent if an unscrupulous reporter is really dedicated to their fabulism.

So that's why we punish it so heavily when it gets caught or exposed: to function as a deterrent, to scare the next reporter who might be thinking about cooking one of their pieces. Across all of society, not just in journalism, we punish lying really severely for exactly this reason. Lying hurts us, and it is very difficult to prevent or to catch in a timely manner. For society to function, we have to be able to trust each other, mostly. In a reasonably healthy, functioning political system, politicians may lie a little bit all the time, but they do it in predictable ways that you can account for. You just assume that they're exaggerating to favor their position. You just assume that they're not fully honest about their personal lives. You just assume that they're engaging in some light graft here and there. And yet the system still works well enough. When a politician really steps out of line with an outright lie or brazen corruption, we shame the crap out of them so that all of the other politicians fear the shaming. Shame keeps them mostly in line. But everything changes when you get a politician who has no shame, a politician who is willing to just say whatever their supporters want to hear without giving a first thought as to whether it's actually true. You get a charismatic leader with zero shame and a lot of supporters who have little to no shame themselves, and there's really nothing you can do to stop the lying. Social sanction is the only real defense that we have against lying. And if you think there's no difference between a little bit of lying and a lot of lying, let me remind you of the dictum of Paracelsus, which we so often invoke when talking about food safety: the dose makes the poison. A lot of lying, a lot of corruption, is meaningfully worse than the normal, modest level of lying and corruption that we've historically tolerated from our elected leaders. The dose makes the poison.
Take the case of Putin's Russia. Corruption and deceit ran so far off the rails there that they had no idea how weak their own military was before they attempted to take Kyiv. Putin really thought that was going to be a cakewalk, because his army looked way more lethal on paper than it actually was, because of a top-down culture of lying and corruption in Russia where procurement officers steal and sell the tank armor and then report to their superiors, "Oh yeah, we totally put that armor on the tanks. The tanks are ready to rumble." Narrator voice: the tanks were not ready to rumble. Lies are actually damaging in that way. And this brings me back to ChatGPT and the like. People feed the computer a zillion documents, often illegally pirated documents, written by real people. The computer looks at which words tend to follow in sequence from each other, and it produces statistically likely strings of words in response to your prompts. Obviously, these systems are more complicated than that, and they're getting more complicated every day. But on a fundamental level, I'm pretty sure that I just gave you an accurate summary of how these so-called AI products work at their core. The chatbots don't know anything. They don't think, they don't reason. They're just doing word math, because computers are really just elaborate calculators. And yet these new chatbots work kind of well. They get a lot right as they puree our human writing and extrude the result back out at us for a monthly fee. The biggest problem with them is that they lie. Not always, not even most of the time, but they still lie a lot. Actually, it's less like lying and it's more like bullshitting. Remember when you were in a discussion group in school and you didn't do the reading, so you listened to what all the other students said, and then when it was your time to talk, you just kind of spewed a word salad that sounded like what everyone else was saying? We have a word for that. It's bullshitting.
"Dude, I totally bullshitted on that answer. I hope our teacher isn't too tired and defeated to call me out on it." Large language models are machines. They do not know. They do not think. They say things that kind of sound like what people who know things would say. This is at least part of how chatbots end up "hallucinating," as they call it, dreaming up completely insane claims of fact that they proclaim with great confidence and authority. A timely and relevant example would be the, uh, Make America Healthy Again report, a document for which the Trump administration has refused to identify the author. But the report blames pretty much all major US health problems on childhood vaccines, pesticides, ultra-processed foods, and prescription drugs. This new report is, broadly speaking, hogwash, and would be hogwash even if the anonymous authors hadn't written it with AI, which they very obviously did, as indicated by the "oaicite" tags in the URLs for the citations, which are riddled with insane errors. The report makes claims that are directly contradicted by the studies it cites to support those claims. And at least a few of the studies cited do not exist at all. The chatbot straight up hallucinated them. People get things wrong all the time, but they tend to get things wrong in predictable ways that you can account for. Chatbots get things wrong in super weird ways. And unless you already know a lot about the subject in question, it can be very hard to spot the lies. Worst of all, a chatbot feels no shame. There is no way for us to socially sanction ChatGPT for lying and thus scare all the other robots straight. One of my very favorite kind-of-stupid-but-still-awesome movies is the 1975 dystopian future sports film Rollerball with James Caan. A rad vintage movie depicting a future in which there are no governments, only oligarchic corporations that control every aspect of life.
Jimmy Caan plays an aging star athlete whom the oligarchs need to destroy because he's giving the people too much hope that individual effort can make a difference in the world. Jimmy Caan's character wants to know how his world works, who leads these corporations, how they came to power, and so he goes to the future world's sole remaining library to see the books. But he soon learns there are no books there. The books have long since been digitized, dumped into an artificial intelligence named Zero. Jimmy Caan asks the Zero computer for some simple, straightforward answers to some simple, straightforward questions. And the computer freaks out, refuses to give any straight answers, and then it basically melts down. This is a movie from 50 years ago, and it is legit starting to feel like it could be reality 50 years from now. Now, if the clothes and the hairstyles from Rollerball become reality in 50 years, I will die the happiest 93-year-old ever in my ringer tank top and skintight leisure suit. But please, let's not dump all the books into a computer that doesn't understand what the books mean. If you want to cook with AI-generated recipes, man, do that at your own risk. I screw up in my recipes all the time, but I tend to do it in predictable ways. I confuse tablespoons with teaspoons. I screw up the metric conversions, always. But the folks in the top comments usually spot those problems even if I don't, because they know me. I have been contacted by multiple tech startups that say they want to use my stuff to train AI recipe generators. And to these people, I always say two things. One: thank you for asking instead of just doing it illegally. And two: I hope your entire industry is consumed by the flames of its own making. Like I always say at the end of my videos, keep it creamy and dreamy.
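The "word math" described earlier, counting which words tend to follow which and then emitting statistically likely strings, can be sketched as a toy bigram model. To be clear, this is a drastic simplification for illustration only: real LLMs are neural networks trained on subword tokens, not bigram counters, and the tiny corpus and function names below are invented for the example.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Emit a plausible-sounding word sequence by repeatedly
    sampling a follower in proportion to how often it was seen."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this word was never followed by anything
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

# A made-up corpus echoing the carbonara script's salty-water advice.
corpus = ("we need this water to be as salty as the sea "
          "the sea is salty and the water is salty")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is grammatical-ish word salad that mimics the training text without any understanding behind it, which is the point of the analogy: scale the same statistical idea up enormously and you get fluent text, but still no knowing, thinking, or shame.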
