Jacob Ward: Overlooked Consequences Of A.I. & How To Preserve Your Humanity
There’s a broad temptation we each face to enlist Artificial Intelligence tools in all nonprofit and personal decisions. Some people have intimate relationships with A.I. bots. At what cost? Jacob Ward has spoken to psychologists, mediators, venture capitalists, and others on this question. He shares his research learnings to help you and your nonprofit determine A.I.’s boundaries. Jacob is a veteran journalist formerly with NBC News, reporting for Nightly News, The TODAY Show and MSNBC.
We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners
Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.
Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.
And welcome to Tony Martignetti Nonprofit Radio, big nonprofit ideas for the other 95%. I’m your aptly named host and the podfather of your favorite hebdomadal podcast. Oh, I’m glad you’re with us. I’d be forced to endure the pain of clinocephaly if you hit me over the head with the idea that you missed this week’s show. Here’s our associate producer, Kate, to introduce it. Hey Tony, here’s what’s up. Overlooked consequences of AI and how to preserve your humanity. There’s a broad temptation we each face to enlist artificial intelligence tools in all nonprofit and personal decisions. Some people have intimate relationships with AI bots. At what cost? Jacob Ward has spoken to psychologists, mediators, venture capitalists, and others on this question. He shares his research learnings to help you and your nonprofit determine AI’s boundaries. Jacob is a veteran journalist, formerly with NBC News, reporting for Nightly News, The Today Show, and MSNBC. On Tony’s Take Two: air travel civility. Here is Overlooked Consequences of AI and How to Preserve Your Humanity. It’s a pleasure to welcome to Nonprofit Radio Jacob Ward. Jacob is a veteran journalist covering the intersection of technology, human behavior, and social change. He’s currently reporter in residence at the Omidyar Network, writing about cutting-edge innovation and pioneering forms of restraint, and a strategic adviser on the deployment of AI for companies large and small. From 2018 to 2024 he was technology correspondent for NBC News, reporting for Nightly News, The Today Show, and MSNBC. He’s @byjacobward on all the social networks. Jacob Ward, welcome to Nonprofit Radio. Thanks, Tony. I appreciate your time. Yeah, I appreciate you having me, and thank you for that glorious pronunciation. You said Martignetti. Thank you. You know, I speak a tiny bit of Italian thanks to a semester trying to be a chef in Italy. I thought I was gonna break free and be that guy. Turns out that was not my path, and so here we are together. And I speak Italian, but I know how to say Martignetti, so that’s good. All right. Well, you know, maybe you know Italian since you were studying to be a chef. That’s right. Uh, what happened to the culinary career? Why did that end? I wasn’t brave enough at that time to wander into an Italian, uh, kitchen — with my terrible Italian and my 6’7” frame, I’m a very tall dude — to wander into these cramped little kitchens and say, what do you want to do with me? I just didn’t have that gene yet. I didn’t have that muscle developed yet. It took me a little while. It turns out that, uh, 10 years of being a television correspondent will beat that right out of you. So I wish I could go back and try again, but that was not my path, it turned out. But you took a vastly different path — journalism, reporting, research — and you’ve been focusing for a long time now on artificial intelligence. You had some, uh, forecasts, I don’t know, predictions of the frenzy that we now find ourselves in, but years ago. We’ll get to some of those from your book. Um, give us an overview of your thinking, though. You’re concerned about overlooked consequences, people rushing in, institutions rushing in.
We have a full hour together, but, you know, give us an overview of what you’re finding. The overall thesis that I am pursuing is that AI, I think, is gonna do to some really fundamental cognitive and social abilities what Google Maps has done to our sense of direction. I think that the prosthetic, outsourced decision-making system that AI, I would argue, pretends to be is the perfect way of ensnaring an ancient decision-making system that we all use the vast majority of the time, one that loves to let other things make decisions for it. And as a result, the point I’ve been trying to make for the last decade is that our brains and our society are totally unprepared for what this technology is going to do to us. You know, and I wrote this book, as you mentioned, to try and articulate that. I thought I was like 10 years early, Tony, and then the book came out about 9 months before ChatGPT did, and since then I’ve been watching my thesis come to life. All right, the book is The Loop: How AI Is Creating a World Without Choices and How to Fight Back. Since we’re talking about the book, we may as well get into the book. Um, your concern is, as you say in the summary, that it will amplify our good and bad human instincts, and these things are happening without us realizing. Right, right, right. So I basically spent several years doing a PBS documentary series called Hacking Your Mind, and that was a crash course in the last, like, 60 years of behavioral science. We got to go all over the world and talk to these experts, uh, in all these different areas of human behavior. And their message over and over and over again was: we make most of our decisions unconsciously, and the way in which we make those decisions is incredibly programmatic. It’s very easy to predict — not easy, but very predictable — and it turns out very manipulable. We are very malleable. And the upshot of that for me in my day job was to also be looking at all of these companies that I was speaking to, uh, you know, as a tech correspondent, who were using at the time primitive versions of AI. This is pre-transformer models, which is the technology that made ChatGPT possible, uh, you know, where people were using sort of human-reinforced learning kind of stuff, very early versions of the AI that we now use so widely. And I just thought to myself, wait a minute, this is a perfect pattern recognition technology. It’s going to pull patterns out of huge tranches of data that a human being could never really analyze otherwise. All it does is find patterns, in many cases without even being able to explain what that pattern is — a pattern it can’t even describe but it can predict. And I thought, wait a minute, this is a really dangerous combination, because this is not a technology being built by universities. It’s not even a technology being built by the military, as past big knowledge innovations have been. This is one entirely being built inside for-profit companies that are going to be under incredible financial pressure to make their money back from the investment. And so I just worried, you know — and I go into many examples in the book of here are places that it could do fantastic things, and your listeners, I think, are in a realm that could be absolutely transformed in a positive way by this.
But unfortunately, the way this thing is gonna get packaged for revenue by the companies that make it, I think very often it’s gonna wind up amplifying the worst parts — or at least the most primitive parts — of our decision-making system, you know, what the kids call the lizard brain. And so that is fundamentally what the book is about: if we let the market run wild with this stuff, it’s gonna wind up turning us into an ancient version of ourselves that we’ve worked really hard to get away from, and I feel like I’m seeing that playing out in real time here. Let me take a step back just to pull on something that you said, because I’ve always wondered about it: what is it that brought us ChatGPT from the previous iteration that you described as more primitive? What technology, or, I don’t know, what was it that enabled ChatGPT to emerge? Yeah, so late 2017, early 2018. Um, up until that point, in order to use what was at the time the cutting edge of AI — stuff that they would refer to as machine learning, neural networks — you had to do a kind of training that was very specific. Imagine that you’ve got a robot that you need to teach how to go into a sandwich shop and follow the rules properly, right? What you would have to do back then is say, OK, look, here’s the shop. OK, there’s a line of people, you wanna stand at the back of the line, don’t go behind the counter where the food is, you gotta stand in that line. OK, then once the next person steps forward, you step forward. And in each case you need a human being to reinforce what’s the right and the wrong choice in that process, and it’s very laborious. As you can imagine, you needed at the time thousands and thousands of paid people on an Amazon-run platform called Mechanical Turk to do the training to make that possible. Then along comes 2017, 2018, something called a transformer model. And what a transformer model made possible was to pour all of the people who’ve ever stepped foot inside a sandwich shop and gone through the process successfully — you can pour all of that data into the top of an enormous funnel, and at the bottom of that funnel will come out a set of cogent rules about what’s most likely to happen: after you’ve stood in the line for a second, then you step to the front, and that’s when you order the sandwich and you hand over the cash. These patterns emerge from the machine observing all of this data. And that funnel-capture system means you can take, you know, every star in the sky, every photograph of a mole that might turn into skin cancer, right? You can pour these things into the funnel, and at the bottom of that, without having to be taught piece by piece what’s the right answer, it’ll come up with these shorthand rules for making choices about this stuff. Fantastic, amazing. The problem, of course, is that usually it can’t really explain how it’s making those choices, and our brains are very quick to assume that this thing is then an expert in sandwiches, an expert in stars, an expert in skin cancer, when in fact all it is is an incredibly excellent pattern recognition system. That’s literally all it does. And so the problem is that we tend to anthropomorphize that into thinking, oh, this thing could be an astrophysicist or a therapist or my girlfriend, and that’s the moment that we’ve sort of entered now.
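Jacob’s funnel can be sketched in miniature. The toy below pours observed sandwich-shop visits into a counter and gets next-step rules out the bottom, with no human labeling the right and wrong choices. It is purely illustrative — the event names and data are made up, and real transformer models learn vastly richer statistics than these simple counts.

```python
from collections import Counter, defaultdict

# Every successful sandwich-shop visit, as a sequence of events.
# (Made-up data standing in for the "everyone who ever stepped foot
# inside a sandwich shop" that Jacob describes pouring into the funnel.)
visits = [
    ["enter", "join_line", "step_forward", "order", "pay", "leave"],
    ["enter", "join_line", "step_forward", "order", "pay", "leave"],
    ["enter", "join_line", "step_forward", "step_forward", "order", "pay", "leave"],
]

# Count which event tends to follow which -- the pattern emerges from
# the data itself; nobody reinforces the right answer step by step.
follows = defaultdict(Counter)
for visit in visits:
    for current, nxt in zip(visit, visit[1:]):
        follows[current][nxt] += 1

def predict_next(event):
    """Return the event most likely to come next, per the observed data."""
    return follows[event].most_common(1)[0][0]

print(predict_next("order"))  # -> "pay": a rule nobody wrote down
```

The point of the analogy: the rules fall out of frequency alone, so the system can say what usually happens next without being able to say why.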
Talk about girlfriend. I mean, there’s the company Friend. Oh yeah. I was in the New York City subway — I saw ads for Friend, you know, I’ll ride the subway with you, friend. I don’t know if it’s friend.com or .co or whatever, but, you know, I’ll walk through the park with you. I don’t know, we may get to Friend. Yeah, this is the thing, right? This is the other side of it. You know, there’s a very famous story in my world, um, of a researcher in the 1960s named Joseph Weizenbaum, and he was at MIT. He was German-born and a very interesting guy, and he was playing around with a system that he had built that was basically a teletype machine that could mirror what it received back at you in written form. And what he wanted to do was play with how humans would react to this: if I can make a lifelike conversational system, how will humans respond? And he was trying to figure out what’s the best way to dress this up so people will play with it, and he dressed it up as a therapist. He made it into a Rogerian therapist, which at the time was the fashion in therapy. And Rogerian therapy is the kind where, you know, you say to me, Tony, you know, oh, my wife is driving me crazy, and I say, your wife is driving you crazy. And you say, oh, well, you know how women are, and I say, tell me how women are, right? It is the easiest kind of mirroring back and forth. Well, Weizenbaum tries this thing on his secretary first. And the story is that within 5 minutes — he’s watching over her shoulder — she turns around and says, I need you to leave the room. And he says, oh, and she says, I’m about to volunteer stuff that I don’t want you to see; you know, we’re having a private conversation here. Within a few years, the American Psychological Association was predicting the end of human therapy. Carl Sagan was on TV talking about going into a phone booth to talk to your therapist, right? The frenzy about this thing became so fevered that Weizenbaum quit the field. He was so disturbed by what he had built that he said, I’m out. And he wrote this great book about how humans basically are not ready for this stuff. And then he quit the field, and he spent the rest of his life — he died in 2008 — as, like, a climate activist in his native Germany. And I tell that story to young entrepreneurs. Everybody kind of knows this story now; it’s sort of like a famous parable in Silicon Valley. But back when I was first telling it to kids who were, you know, right out of a business school or in one of these incubators we have out here, they’d always laugh when I said he walked away from the field, because they’d say, well, but he had a great minimum viable product — which is the term that people use, an MVP, right? You need that prototype to get your funding, right? They’d say he had a great MVP; he could just have gone forward, right? And I’d have to be like, hey, you guys, that’s not the point of the story. The point is not that this is market fit. The point is it’s too good at simulating what humans want out of this stuff. And, you know, cut to the fall of last year: the incubator that Sam Altman — the creator of ChatGPT, uh, the founder of OpenAI — used to run, called Y Combinator.
Half of those kids coming out of there — and there’s like 2 dozen companies that come out of there each, uh, each session — half of them were therapy companies, right? Because there’s such clear market need, right? And so Friend — I haven’t looked at this thing you’re describing, but right, whether it’s just sort of a solution for loneliness or an answer to the mental health crisis that young people are in right now, people just think, oh yeah, let’s sic this LLM system on that, because there’s market fit. And the lesson of, you know, of my career has been: we need to be thinking about other things than just market fit. Well, his concern was that humans are not ready for this. I’m not sure we’re more ready now, uh, all these decades later, than when he left the field. Yeah. Friend — I did just a little bit of digging. It’s a necklace. You wear it — it looks like an amulet to me — you wear it around your neck everywhere, and it hears everything that you hear. I don’t know if it has a visual capacity, but it hears everything you hear and everything you say. Yeah, I spent some time with this, and the thing that is reflective about this — I can’t remember if it’s this or another one, but this sort of AI-powered amulet is a thing. There was a company called Humane that tried to make a pin that was going to be this. And Sam Altman and this guy Jony Ive, who was one of the main designers at Apple under Steve Jobs, they’re now teamed up. Jony Ive’s company got bought by Sam Altman, and they are in theory creating some kind of new form factor for AI that everybody expects will be something like a pin, a necklace, you know, something like that. And the thing is, right — I’m shaking my head — yeah, yeah, ladies and gentlemen, Tony is definitely shaking his head right now. But this is the thing, right: we are of a generation that knows some things about surveillance going wrong and knows some things about profit motive going wrong, right? But the generation of kids that are building these systems — and I don’t remember if it’s Friend or another one of these, but I remember listening to one of the founders of one of these companies talking about it — there’s just this sort of, like, college freshman’s idea of, you know, the sort of sociological impact of this thing, you know what I mean? So much insight. Yeah. And so the thing I obviously hope to get into in this conversation is: one of the things you learn as a young software person is that scale solves your problems. So you’re trying to ship a minimum viable product as quick as you can and grow the number of users as fast as you can, not just for revenue purposes, but also because that’s your quality assurance system — that’s your way of catching bugs and getting rid of them. So the more people using your thing, the more bugs will get ironed out and the better the product becomes, in theory. Well, that contrasts very dramatically with how hardware works. If you are the Ford Motor Company and you have a bug in the F-150 you’re building, the more of those that you ship, the more you are compounding your problems — you have that many more trucks you gotta fix, right?
And I have sort of come to develop this theory that while AI people are all trained in that software idea — where if we just ship enough of these amulets, we’ll work it out — I actually think it’s more of a hardware problem, because we are the hardware. And the more you build onto somebody’s habits of, you know, something problematic, I think the more we’re gonna see those effects compounded by scale. So I really worry about this move-fast-and-break-things assumption, which, you know — I still have people in Silicon Valley say to me, without irony, that moving fast and breaking things is how we do things right, you know, that’s the best way to do it. We’re talking about the broken things being human beings. Yeah, and that’s the problem, right? I think we’re poised to break some of the basic circuitry of human interaction and human cognition, and I worry about that. Um, maybe I’m beating this further than I should — listeners know that they suffer with a lackluster host — but just last week, the New York Times had a profile of three adults who have intimate relationships with, uh, an AI large language model. One of them claims that they have intimate sex — that’s their phrase, it’s not mine — with their AI partner. One of them is herself an AI researcher inside an AI incubator. And one of them has married his or her — I don’t remember which — AI companion. I mean, we’re talking about intimate marriage and intimate sex with something that is an artificial entity. And before your listeners start to scoff at the idea that these people must be deranged, or, you know, this, that, or the other: there are instances, again and again — and they’re all anecdotal for the moment, but they’re beginning to get locked in for some real quantitative study — of people with no history of any kind of mental health trouble, no documented history anyway, right, falling deep into delusional thinking about these systems. And one of the ways that I’ve been trying to parse my thesis, as the statistics are coming to light, is thinking broadly about the umbrella of what people are informally calling AI psychosis. And I’m trying to subdivide that into these different categories, and the one that you’re describing, this attachment category, is a really big one. So, in the same way that, as I’m going back and forth with a Claude or a Gemini or a ChatGPT, and because it occasionally interjects some stuff that shows some memory of past conversations with me, I begin to think, man, this thing understands me. It gets me, it knows me in this fundamental way. There’s a sort of benign version of that, in which you are under that misconception as a product user and it’s just sort of smoothing your experience with the product. The extreme version of that is what you’re describing, where people truly come to believe that these systems are a synthetic soul mate — they, you know, are filling a lack of human contact in their lives, you know, all that stuff. And I would say this plays into multiple things. Like, you know, first of all, there’s a loneliness crisis in this country. Not to mention: half of the country holds 98% of the wealth, and the other half holds 2%, right?
So there’s a huge swath of human beings in this country who simply don’t have the time for human attachment, really. Like, I talked to a woman who treats her AI chatbot as a boyfriend. When you ask her about the details of her life: she sleeps 5 hours a night, she works two jobs at the airport. She’s got no time to create an attachment with somebody. Meanwhile — and this is the problem, right, and this is part of why I call my book The Loop — all of these effects seem to compound one another. Yeah. As we get into a world in which people are used to only a hyper-sexualized, always-available chatbot intimate experience, how are those people going to form real connections with other people, right? Who aren’t always on, and not always sexualized, and not perfectly tailored to your desires as a product. And so — sorry, you cut me off whenever I’ve gone on too long — but a couple weeks ago, I think it was 3 weeks ago now, OpenAI actually released some numbers showing how prevalent the instance of what you are describing is among their users. They were doing categories like excessive emotional attachment, mania and psychosis, and suicidal intent, and they were showing the numbers of people who are exhibiting all those kinds of things. And you know, it’s problematic for many reasons. First of all, it’s an internal study. Second of all, they don’t show it over time, so they’re not showing whether the numbers are going up or down. And this is not independent researchers; this is their release, so nobody else is checking the stuff out. But, um, they say that 0.15% of users, for instance, are openly discussing suicide — talking about their intent, talking about how they might do it, you know, seeking advice, essentially. And the chatbot is trained, in theory, to catch that, push them to dial the national crisis line — you know, there’s some stuff that it’s supposed to do — but they say it’s not perfect. There are many cases in which that’s not what happens, and we’ve seen some families filing suit on that basis. I think 2 weeks ago, 7 different families filed suit in California alone. Now, 0.15% of the total users — that’s of 800 million weekly users. This is the fastest-growing piece of software in history, right? 800 million weekly users; 0.15% of that is about 1.2, 1.3 million people every week openly discussing suicide. Now, the national rate of suicide attempts on an annual basis is in the tenths of a percent — I think it’s like 0.6%. So this is a much smaller number than that national instance — but that 0.6% is an annual average, and the 0.15% of ChatGPT users, that’s weekly, right?
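That back-of-the-envelope math is easy to check — a quick sketch using only the figures quoted on air, from OpenAI’s own self-reported release:

```python
# The figures Jacob quotes on air, from OpenAI's self-reported release.
weekly_users = 800_000_000          # reported weekly ChatGPT users
share_discussing_suicide = 0.0015   # 0.15% of users, per OpenAI

print(f"{weekly_users * share_discussing_suicide:,.0f} people per week")
# -> 1,200,000 people per week: the ~1.2 million Jacob cites
```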
So the thing that this shows me, right, is that people are forming these very powerful and deep attachments to these systems. And the question ends up being: is this just a reflection of society, and therefore it’s not, you know, a company’s fault that society’s statistical instance of this bad thing is showing up? That was the argument the social media companies made for years. Or, you know, at the same time, you’ve got the company — OpenAI literally saying we want this thing to be an emotional companion, not just a productivity tool — and they’re building it to be your friend, because that is the best way to get you to use the product. So this is this moment in which, you know, I talk to some people and they say it’s not there, they don’t have any responsibility for this, this is just how humans are. I have other people saying they have a deep responsibility for this, because they are specifically playing on human attachment, this fundamental need. And that goes back to your title, The Loop. The more affection and attachment you feel, the greater your reliance, and the closer you’ll get — that’s right — to the artificial LLM tool. That’s right, that’s right. Now, I want to point out here, right, that the vast majority, as we’ve seen in the numbers — the vast majority of people are not coming to believe this thing is a synthetic soul mate, right? But I would argue that a significant number are. And I think anybody — and this includes me — who misapprehends what this thing is, right, who comes to believe that it knows more than it does, or that it is somehow, you know, a brainstorm partner, or that it can be your consigliere in some way — that is itself, I think, a kind of psychosis that we are all falling into. And what we’re seeing already in the studies, on an institutional basis, is that it’s flattening the kind of creative thinking that you want out of a group of people. This is one of the things that I’ve been really bothered by. If you look at, um, Nature Human Behaviour — this year they came out with a study called, uh, ChatGPT decreases idea diversity in brainstorming — what they found is that when you compare a group of people who are looking at a thing on Google, you know, in conventional sort of web searching, to a group of people who are using ChatGPT for brainstorming, the number of ideas and the language around those ideas flattens out in this really particular, measurable way. You just get fewer ideas out of the group, because everyone’s kind of relying on the same thing. People have also found this to be true in writing. Wharton just did a study that showed that if you ask a group of people to write advice on health and wellness, and you let some of them use Google and some of them use an LLM, it’s the same deal: the Google writers — and I can’t believe I’m here defending Google search results, right? You can imagine 10 years ago I might have been saying something else, but, you know what I mean, this is where we’re at — the Google search result people came up with a much wider variety of ideas, and their language around it was more nuanced and interesting and subtle and diverse, right? Whereas the LLM people, it was just much more inane, much more the same. You know, and so it’s that flattening that I worry about. Even if we’re not talking about fully losing your marbles with this stuff, it’s still eroding something. Yeah. It’s time for Tony’s Take Two. Thank you, Kate.
I’ve been doing, uh, some travel lately. I was flying for, uh, Thanksgiving, away for about a week or so, and actually I’m flying tomorrow. That’s why I’m in a hotel tonight; that’s why I don’t sound like my usual high-quality studio self. You get not only a middling host, but I also sound middling. So now everything is equal. It’s not a middling host with a good-quality mic; it’s a middling host with a laptop mic, which is a middling, a lackluster mic. So everything’s, uh, on par. Well, not as you’d expect, but everything’s equivalent this week. But the air travel, that’s what I wanted to talk about. Uh, I find folks are generally very civil and decent and humane to each other as we’re all flying together, even over the Thanksgiving holiday, where the going and coming are two of the heaviest air travel days in the US. I’m pretty sure the Sunday after Thanksgiving, which is when I flew back, uh, is the heaviest air travel day in the country. And you know, people are still very decent to each other — you know, patient, uh, for boarding; helping the elderly or short folks with the overhead bin space, you know, getting the bags up there; um, changing seats if, you know, two family members aren’t seated together, ’cause, I don’t know, for some reason the algorithm didn’t put them together or whatever; people surrendering seats, I saw that. Um, and deplaning, you know, just patient, waiting your turn. So, you know, I mean, I know there are exceptions. We’ve all seen the, uh, pugilistic passenger videos, where people are battling each other in the aisles or have to be dragged out in zip ties or duct tape or whatever by police. Yeah, I’ve seen those. But overall, I find people, even on the busiest, uh, air travel days, just like I said, decent to each other. Civil, humane. So that’s wonderful, that’s wonderful, because, you know, people could be posturing to get off the plane early because they gotta get somewhere, get to their connection, you know, whatever. I don’t see a lot of that. I hardly ever see it, hardly ever. So that’s good. That’s good. Air travel civility. We’re doing well; let’s keep it up. Do your part, do your part to be civil. And that’s Tony’s Take Two. OK. Since I’ve never been on a plane, I can only go off what I see on social media. So from what I’m hearing from you, it sounds like you’re pretty lucky with your flights. No, it’s not that I’m lucky, it’s that those fighting, uh, belligerent passengers are the rare exception, but those are the ones that make social media. You know, a routine flight with everybody being humane, civil, courteous, polite — nobody’s gonna watch that. Nobody’s even gonna record that. That would be 2.5 or 3 hours of total boredom. Nobody’s gonna watch that. But the 8 seconds where there’s a flare-up on one in — uh, it’s probably 1 in like 100,000 flights, and there are tens of thousands of flights in the US every single day. So don’t judge by what you see in the social networks. We gotta get you on a plane. Uh, you know, what, 22, right? 22? Never been on a plane? Yeah, we gotta fix that. I gotta fix that for you or something. I’ll just fly you somewhere, and then we’ll have lunch and fly back or something. We gotta fix that. We’ve got beaucoup buttloads more time.
Here’s the rest of Overlooked Consequences of AI and How to Preserve Your Humanity with Jacob Ward. What I believe it’s eroding — I’ve been saying this for a long time, because we’ve had many shows on artificial intelligence — is creativity. Basic creativity. I think when you’re looking at a blank screen in Word and you need to fill it, or you need to get started filling it — that, I think, is the most creative act in writing. You’re not relying on any external source. You’re relying on what you know to write about the topic that you’re tasked with. Versus — the very last century of you, Tony, pardon me, how very last century, so quaint, typing, what are you talking about? — versus using one of the large language models to write you the draft, and you become the copy editor, or maybe you even become the managing editor. But you’re editing another thing’s initial draft. That, I think, has ceded the most creative act in writing — or in composing music. Uh, right — there certainly are, you can compose music with these tools. Um, and the thing to understand, right, is that the product it puts out, just because it’s mimicking a thing — this is the need to hold two concepts in one argument that I’ve really struggled with in my work. On the one hand, I’m saying this thing is just a parrot. On the other hand, I’m saying its ability to parrot language and music and art and the rest of it is good enough to tickle the part of your brain that likes music and art and the rest of it. It’s good enough to make your brain go, oh, this is good music, or, this is convincing writing. I mean, the studies that we’ve seen already on, for instance, politically persuasive writing have shown that LLM-generated stuff is better — it’s more effective than human-generated stuff. Now, this is because it’s a pattern-based system and it’s done the analysis. You know, it can’t tell you what the analysis is, but it’s done the analysis. And so the thing that I want people to not get confused about is: just because it is a simulation of life, you know, a simulacrum, doesn’t mean it’s not going to be incredibly effective. And so the thing we’re gonna have to defend for ourselves is — the term I keep coming back to, because it’s a very bad word in Silicon Valley, is friction. The pointless, friction-filled act of sitting at a blank screen and writing something, or sitting with a blank piece of paper in front of you and writing something. It’s less efficient. It’s not even gonna produce good-enough work, right? My 12-year-old was just the other day saying, man, I hate using ChatGPT for images, because the images it puts out, I find myself thinking, man, I really wish I could create an image as good as that. But I like making images, and so I’m gonna just keep doing it on my own, right? And, you know, my 12-year-old for president — like, I just think she’s so smart about that, right? But this is the thing: the market’s not going to reward the way we used to do it. It’s going to reward the new way, because it’s effective, because it’s efficient. And so we’re gonna have to, I think, push back against that a little bit if we want to protect some long-term goals we have for ourselves as a species — protect humanity.
I mean, I think this is what makes us human — the creativity you’re talking about flatlining when brainstorming groups have, uh, a large-language-model-based start. To me that’s antithetical to brainstorming. You’re all supposed to bring your own individual perspectives, and no idea is eliminated at the first phase. But you’re not supposed to all start from something common. The only common level is: we’re all human, and we all bring our multiple experiences and perspectives to this topic that we’re brainstorming about, and we contribute them, uh, until, you know, until we’ve run out of time, and then we start to parse them down. But you’re not supposed to come with a common basic, uh, a common foundation to a brainstorming session. Right, exactly, exactly. It’s not necessarily about the efficiency of it. I absolutely agree with you. Here’s the other part of it, right. I need a glass of wine. I need a glass of wine, take the edge off. I’ve ruined so many parties, you know. So here’s another thing I would say, right? Here’s another 19th-century concept — you’re talking about Microsoft Word. Let’s, like, think about pre-Zoom life, right? So one of the researchers that I spent some time with, and really was blown away by, um — she is European and American; she holds appointments in both places. Beatrice de Gelder is her name. She used to be thought of as kind of a crank by a mostly male research world that didn’t like the ideas she was playing around with. And what she was playing around with started with something that came up in World War One, and then she refined it: the idea that in people whose visual cortex has been damaged — that the optic nerve connection between your eyes and your brain has been damaged, cut off in some way. In World War One, it was typically by some kind of horrible trauma. Now you can study people who’ve had it cut off by a stroke, a lesion. She studies these people, and she’ll do things like — so they come in and they’re blind. They’re blind, you know; they have canes, the whole thing. And early in her research, she put one of them into a hallway, because she’d been striking out with every other experiment. She put him into a hallway and put this little obstacle course in front of him, because she noticed he just moved in a weird way for a blind guy. And they take his cane from him and they say, can you walk to the end of the hall? And they don’t warn him about the obstacles. He walks toward the first one, he turns sideways, scoots past it, turns the other way, scoots past it. This is a blind man, right? He gets to the end of the hallway, and they rush up to him and gasp and say, how did you do that? And he says, do what? He has no conscious memory of having done it, or of any of the mechanical stuff he needed to do with his brain and his muscles to make that happen. So de Gelder then, after that, begins playing with new experiments around the same kind of person. And she starts rigging up their faces with a bunch of little sensors and then seating them in front of a big screen, on which she will show huge human faces — grinning or frowning, or masks of pain or masks of fear — and then she’ll ask these people, what do you see?
And they’re sitting in front of this huge grinning face, and they’ll say, are you kidding? I keep telling you, I’m blind. I can’t see anything. What are you talking about? But the sensors on their face register that when you show them a smiling person, their face starts to smile back. You show them a frowning person, their face starts to frown back. There is a non-optic-nerve way we are catching emotions from each other and transmitting them, which is how we are able to escape danger, right? A snake comes in the room, my face freaks out — I make the face that I would make with a snake — and you’re on your feet and out the door before, you know, we even have a conversation. You don’t ask me, like, what kind of snake is it, right? We are out of there. And what she and all of these researchers that have come since have shown is that, sitting together physically, there is so much stuff being passed back and forth between us — stuff that they can measure if not directly observe. And so, the idea that we have created this vastly more efficient system that lets you and I speak to each other from 3,000 miles apart — and I’m thrilled that we are, right? — but the assumption that this is as good as you and I sitting together in person and making up ideas... the products are so far out in front of actual scientific understanding of what’s valuable in human connection, and that is a thing I’m bothered by, and no one wants to pay any money to hear about it. Well, people don’t have to pay money to hear about it here. Yeah, no, absolutely. We need to keep our — I go back to — keep our humanity, remain conscious that this is a tool. It’s wildly helpful, uh, but it’s no substitute for being, uh, IRL, right — in real life, person to person — any more than Friend is a substitute for intimate sex with someone that you’re intimate with. It’s, yeah, it’s our humanity. All right, look, um, I need a bottle of wine now, but I do wish we could chat about all this over a bar — at a bar, um. You think, uh, you think a good bit about, um, social media moderation — what we’re all facing there. Uh, why don’t we — let’s talk about how we’re being guided, led, uh, controlled — I don’t know which verb you prefer — but what’s happening to us in the social networks? Well, I mean, one thing that we saw, um, you know, in the social media era was the capacity for human beings to sort of tailor their information diet to as tight a little bubble as possible, right? And typically that bubble was defined by how many other people you could find that were interested in that same kind of bubble, right? Could you subdivide somebody’s interest in a conspiracy theory, or, you know, Kashmir, or whatever else, right — could you put those people into a bucket, and find enough other people that also fit into that bucket? That’s the essential, you know, task of social media as a business. Well, now you don’t even need other people, right? The problem with AI — another of the difficulties with AI — is that your information diet
suddenly is just you — and yet it gives you enough of a feeling that you are connected to something larger than yourself, which was what social media made possible: the feeling of connection with something larger than yourself. Now suddenly a chatbot is going to give you that capacity, that feeling of having tapped into something, without anyone else being involved in it, right? Now, this is, of course, not taking into account the fact that these companies are trying to draw from actual, you know, ground-truth reporting and whatever else, right? But in terms of what the technology is capable of doing: it is capable of giving you the impression that you are tapped into something larger than yourself when it’s just you by yourself. And so I definitely worry that if we thought the information bubble problem was bad in social media, it’s gonna be even worse in this case, because of that kind of isolation. You talked some about, uh, forms of restraint. Yeah — this sort of gets us to, you know, how to preserve our humanity. But let’s, you know, let’s start to move to brighter things. We do have control. We are humans. You call it forms of restraint, which I find interesting. So I’m playing around with a new concept — and I gotta find a day job before I can afford to write this book — but the book is called Great Ideas We Should Not Pursue. It’s based on something my grandfather used to say. He’d say: that’s a great idea. Let’s not do that. And it’s a kind of, uh, mantra. I knew I was on to something when, you know, I’d say it to founders here in Silicon Valley and their heads pop off. People just hate that phrase here. And so I think I’m on to something, because clearly, if it makes them nuts, uh, there’s something there to explore, because it’s so antithetical to the moment we’re in, right? What — not pursue a thing that you could do? Like, not go for market fit? Why wouldn’t you do that? Well, here’s one place, for instance, where you’re gonna need to do something kind of nonsensical as an organization, that I think maybe your listeners in particular might benefit from. You’re gonna inevitably have some company — maybe there’ll be an evangelist inside your organization or on your board — who says you should implement AI immediately to wipe out your costs when it comes to the entry-level work that, uh, you’ve until now relied on recent college grads to do, right? The filing, the, you know, the customer service, the receptionist role, that kind of thing — you’re not gonna need it, because the AI can do it. Well, here’s the thing: the long-term problem that that is gonna create — and people much more expert than me have already been publishing on this topic. The effect of that is going to be wiping out those first and second jobs that kids get out of college at these organizations. And let’s say you cut to, like, 6 or 7 years from now, when you would typically be promoting that first- or second-job occupant into that third-level role. Maybe now they’re gonna manage somebody, right? Maybe now they’re gonna be outward-facing and really speak for your organization in public, right? They’re gonna have to have developed the soft skills.
One of the misconceptions that I think even the makers of these systems are operating under is that somehow we will carry forward — you know, and this is funny for people who really are eschewing college and saying you don’t need a university degree, and education isn’t really necessary, all that stuff, right? What they, I think, have failed to recognize in themselves is the assumption that you’re gonna carry forward the critical thinking and the socialization that education, in theory, gives you. And as a result, you’re gonna be in a position where nobody’s qualified for your third level, you know, that third job, that first management position, whatever it is. And so the friction, right — the blank-screen, uh, piece of human-preserving illogic that I think your listeners should maybe consider — is you’re gonna need to find another way to train somebody up into that role. So even though you’re gonna have somebody saying, listen, you can take that customer-service person off your balance sheet — don’t do it. Use the technology if you feel it’s ready for it, but you need to save that money, because what you’re gonna need to do is bring somebody in just out of college and put them in a kind of — I don’t know, we could call it an apprenticeship, a residency — something where they’re in your organization, learning the ropes, learning the place, learning to be reliable and professional, learning to be the kind of person you’re gonna value in a few years. And bring them up in that role. One of the things that you could conceivably do with that person, for instance, is give them an enormously more creative set of responsibilities. You could say, listen, I need you to be coming up with the weirdest ideas you have, right? I need your weirdest thoughts for how we’re gonna, like, break through on TikTok, or your weirdest thoughts on how we should, you know, assemble a youth advisory committee, or, you know, whatever. But like — the market logic and the sales pitch of these companies is gonna be, you don’t need this stuff anymore. I think instead that the illogical and absolutely crucial role of somebody leading an organization is going to be: keep that budget, save that budget, and use it to make your people better. Because if you don’t, this technology is going to rob you of those people just when you need them in a few years. And you mentioned this — I’m just reinforcing that it’s not just the, uh, like, the intellectual part of the work; it’s the socialization. It’s the socialization to your organization’s culture, which someone who hasn’t worked their way up over 4, 5, 6 years isn’t going to have when they come in day one and they’re leading a team now, and they have to understand the culture and the team and the substantive work. I think that’s absolutely right. The socialization, the, um, the self-discipline. I mean, I was talking to a guy the other day — I was at a dinner party, and he was lamenting that his daughter doesn’t want to be a lawyer or an engineer; she wants to be an artist. And I said, I don’t know, man, have you seen what’s going on with lawyers and engineers right now? Like, I don’t know that your kid would actually do much better in that market at the moment. And if anything, I found myself thinking — again, you can hear my love for my 12-year-old — she’s like Björk.
Creativity just, like, comes out of her, and she is driven by that creativity to be fairly disciplined about her output. She really — you know, she just did the, uh, NaNoWriMo, the write-a-novel-in-a-single-month thing, in November. It’s a writing event in which you try and bang out a novel in a single month, you know; she sat there and banged that thing out. And to my mind — I said to him, I was like, a prolific and disciplined creative person could in fact be the most valuable kind of professional in the future, because everything else in the market is gonna tell you, you don’t need that, right? But if you can actually generate real self-discipline and real organization to your creative thinking, that’s the one thing I think that the market won’t be able to take away from you — or that will still be really, really powerful and valuable in the market. So I just hope that your listeners, like everybody, will just think about, like, how do we protect that, rather than fall for this line. You know, I mean, Sam Altman, uh, at the beginning of 2024, said to a podcaster, Alexis Ohanian — he said, um, that he and his buddies, his tech CEO buddies, have a bet going as to when we will see the first one-person billion-dollar company. Right? That’s their vision. It’s not freeing your time and making you more creative. It is pillars of wealth, you know, towers of wealth, uh, where no one else gets a job, and that doesn’t feel to me like what we should be working toward. That’s dystopian to me. Pillars and silos — if one person, each person, has a building, then with that kind of wealth, you know how many buildings you get to have. That’s right. I don’t know, after the tech bros buy all the land, buy all the real estate, where do the rest of us go? Well, you know, we found out — this is also quaint, I suppose — we found out during COVID what jobs are truly important to our functioning, to our survival. Now, I live in a little beach town in North Carolina — the ocean is across the street; I’m one row of houses away from the beach. And to me it was the garbage men; uh, the food store workers at my local food store; um, restaurant workers — in my town, uh, it’s so small we don’t have delivery, you have to go pick up, but they were hopping, they were hopping the meals out into the parking lot, you know, in bags; you couldn’t go in the restaurant. Um, postal workers — I still needed my mail. Um, technology — I don’t want to go too far, but I needed my technology infrastructure. I needed my Spectrum to be working. Uh, I mean, I don’t know, maybe I’m missing one or two things, but it’s the people who are actually doing the work. I would have been fine if Wall Street had shut down. My money would have sat in the bank and it wouldn’t earn as much, but I’d still be surviving. If I have the garbage being picked up and I can go buy food, then I can still survive quite well without Wall Street functioning.
Well, this is the thing: it’s not clear to me, you know, that that vision — so the folks making these technologies, the people who are at the top of the Magnificent Seven, as the big companies are called, right — they really seem to believe, and I think they really believe in their heart of hearts, that there is a utopia coming, made possible by AI, in which we eradicate cancer and, you know, everybody gets to be a watercolorist in their garden, kind of thing. And I just think there are multiple problems with that. One, the math just doesn’t quite work out, uh, on everybody getting to relax all the time. Two, culturally in the United States, and especially in Western democracies, we don’t like to just let people hang out. We really are allergic to the idea of paying people for free time. Three, there’s something called the Jevons paradox. William Jevons was a 19th-century British economist who came up with this paradox where we were burning coal more efficiently than ever, and yet we were burning more of it than ever. And it’s come to be used to describe basically any case in which, when you use something more efficiently, you wind up using more of it. And I think that’s gonna be true of free time, uh, of just our time in general: if you give somebody the opportunity to work as hard as 10 people, then they’re gonna end up having to work as hard as 10 people. It’s not that they’re gonna get to work one-tenth as hard, you know, only 10% of the time. That’s not how it works. Well, we saw that play out with the free time that we were all gonna get from smartphones, because they were gonna make us so much more productive. We were gonna have a plethora of free time. I haven’t seen any more free time. What I see is: you can’t get away, for more hours. That’s right. And working in environments where I didn’t previously work. This is the reason I go camping all summer. Productivity filled those additional, uh, illusory hours. That’s right. That’s right. And so, you know, I just think that what these folks seem to believe can happen, I don’t think is actually something that we culturally or economically are prepared to make possible. And, you know, the founder of Anthropic — he was just profiled on 60 Minutes the other night — in an interview with, uh, Axios this year, he said a really interesting thing. I’m paraphrasing here, but basically the truth of the matter, he said, was essentially: this technology might make it possible for us to cure cancer and create vast riches, and we might have 20% unemployment. Basically, that was his thing, right? And I admire him saying that out loud. Like, I’m glad that he’s making that warning. I’m not sure why a guy who knows that keeps plowing ahead on making the technology, but OK. Because he’s not in the 20%. Well, that’s correct. That’s right, he aspires to be in the 0.001%, up in the towers of money. That’s right. That’s right. You know, I think this is going to require a fundamental renegotiation of the value of human beings, and that’s gonna have to do with both a sense of purpose, right?
It’s my daughter saying, I like to draw a thing even though it’s not as good as what ChatGPT makes, right — a feeling of purpose and accomplishment. And GDP, you know, your productivity — I think we’re gonna have to divorce our financial value from our productivity at some point, and I don’t think we’re on the path to that. Like, I don’t think we’re very good at that in this country, but that’s what I think this technology is gonna sort of force on us, because we can’t all work 10 times as hard for one-tenth of the money. That’s not gonna work. Let’s focus more on, uh, some forms of restraint. If you can share one more on an institutional level, then I’d like to spend a few minutes on restraint on the individual, personal side. You got one more that you can share on the nonprofit level? Well, on the nonprofit level, I really just think that what you have described several times in this conversation is the right thing to think about, which is that you have to protect the awkward, open, creative tasks, even if the technology is saying, oh no, don’t worry about it, I’ll take care of that. And so, you know, the best companies thinking about this stuff now are recognizing: we’re not gonna cut our costs yet; we’re gonna hold off on that. This is not, unfortunately, the majority of these companies — the majority of these companies are just going ahead and cutting costs even before they know whether AI can fill in. Um, but the smart companies, I think, are saying, OK, we have these tools. In theory, it’s gonna automate this set of tasks. Let us, as a result, hand that time back to people, right? If we’re really gonna buy this idea that this technology is going to free us up and let us become a better version of ourselves, we have to enact that. We have to enact that. I’m skeptical about that. Yeah, me too, me too. But if, in theory, I don’t have to, you know, uh, spend as much time doing a travel booking, or spend as much time making up a calendar of events, or whatever the thing is, right — then the expectation, I think, should be that that person gets to spend that saved time doing something that is long-term thinking, that is, you know, cultivating some creative priorities, that is connecting with other people in a way they don’t normally get to. And so while the promise of this stuff, in theory, is to save you work, I think especially as the leader of a nonprofit, you’re actually gonna have to do a little more work in the short term to think: OK, if I could give each of my people 20% of their time back, what would I want them to do with that time? Getting that air traffic control system built, I think, is the short-term institutional thing — before you start saying, I don’t need this person, which is what all these companies are gonna try and convince you is the answer. An example of that could be, uh, sabbaticals, additional time off, like a 4-day week — I mean, there is a national organization for a 4-day work week; a 4-day, 32-hour work week, not a 4-day, 40-hour work week. I’ve had guests on who have implemented that in their companies with great success — a 32-hour, 4-day work week. So, you know, capitalizing on those types of claims, promises.
You know, that's when people start to see that the technology really does work for everyone. Well, that's right, and it's egalitarian. That's how you get people to use it in a smart and creative way; you get them motivated to use it in a way that is beneficial. If you instead say to them, you're going to work twice as hard now, because I expect you to, using these tools, that's where you get people not using them well: perhaps using them to plagiarize, to create reports based on fictitious studies, all that stuff. So I feel like the best behavior with AI is going to be made possible by what you're describing there, Tony. And I think, taking some of the potential bad outcomes to an extreme, that's what leads to the marginalization of large swaths of our population, who then look to a person, or a technology, that promises them retribution. You've been wronged, and I'll be generous and put it on the tech now: you've been wronged by the technology over these years, and I will right that. I will give you your voice back. I hear your grievance, and retribution is coming when I'm in power. I don't think that's such a far stretch from the sort of disenfranchisement and frustration that we see for other reasons now. I think technology could lead us to that, to a continued frustrated population. Let's go to the individual side. Restraint. I want to leave folks with things they can think about doing for themselves, to break us free of everything we just spent 55 minutes talking about. Right. Well, one thing I always want to say before offering advice on how one can change their behavior as an individual is that I don't think it's up to you, the individual, to push back against this tide. I do think it's a tide, and I think some forces much larger than us are going to have to go to work on this problem for some balance to come out of it. I think lawyers go a long way in making change in this country. They're the reason that you and I are not smoking cigarettes in this conversation, and that we drive cars with airbag systems. And a seatbelt. Yeah, seatbelts; that goes back to the '70s. So I'm a big fan of the American liability regime for that reason, and I think some stuff is coming that is going to be helpful with these things. But in the meantime, from a purely personal perspective: keep getting this technology to explain its limitations to you. Any prompt you put in there that says, explain to me what you can and can't do, is a very helpful way of reminding yourself, oh, this thing is just mimicking language. This is not a guru. This is not an expert. This is a parrot, a really good parrot. That, to me, is a really important refresher. You can even set some of these LLMs to remind you of that stuff.
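As one concrete way to wire in that kind of standing reminder, here is a minimal sketch using the OpenAI Python client; the system-prompt wording, model name, and question are all illustrative, and any LLM API with a system-message slot would work the same way.

```python
# A standing "remind me what you are" instruction, sent as a system message.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

SYSTEM_REMINDER = (
    "Before every answer, state in one sentence that you are a language "
    "model predicting plausible text, not an expert or a guru, and name "
    "one limitation relevant to the question (e.g., possible fabrication)."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_REMINDER},
        {"role": "user", "content": "What can and can't you actually do?"},
    ],
)
print(response.choices[0].message.content)
```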
It'll ask you, well, what kind of personality would you like me to be? And you can make it as smooth and as fun to talk to as you want, but I would argue it's better to have it keep reminding you about what it is and what it isn't; make it not as much fun to sit with. You can even use it to tell you, OK, that's enough for a little while. I think you need to build an hour into your day where you're literally doing nothing, because this is the other thing: we were already in this situation with smartphones, but we need to get back to a world in which you preserve a little chunk of your day in which absolutely nothing is going on, to give your brain the chance to get back to its normal state. For my money, I really like going outside, not just to get the sun on you, but so that your eyes have to see something that's more than 30 feet from you. Get your eyes to focus on a distant object on the horizon. As a former New Yorker, this was always the challenge: can you find an avenue where you can look at something that's more than 30 feet away? Look down a street. Yeah, you need an avenue rather than a street. It's the protecting of your brain's calm, creative space; driving without any stimulation other than what's in front of you at the wheel. That's how you're going to get your brain back to a healthier place. And the last thing I would say, and this is true of social media too: I do a lot of work speaking to educators and school groups, and I was part of a group that pioneered some pretty restrictive digital rules in our schools here in Oakland. One of the lessons is, use it to go get what you need, and then, when it brings it back to you, turn it off. When it brings the research back to you, read that research. Don't just let it summarize that research for you and keep going. The system is going to constantly ask you, what else can I do for you? What next? What next? Don't fall into that trap. Instead, you've got to turn away from it, so that it is not the intellectual equivalent of the passive feed, which was social media. Social media changed what was once a go-and-get-what-you're-looking-for model into a just-sit-here-and-swipe-and-we'll-take-care-of-the-rest model. So using it to fetch what you need and then turning away from it is as close to a mentally healthy interaction with the system as you can manage at the moment. I don't claim to have the answers to this, because, like I say, I think this is the perfect hack for our brains. And the idea that human beings have to be somehow responsible for this reminds me: I'm a former drinker. I don't drink anymore, and when the liquor ads say drink responsibly, it makes me crazy, because there's no such thing for me. I think we're going to find that in many areas of cognition and interaction there's no such thing as using AI responsibly. But for the moment, you've got to start making up those rules for yourself, and those are a couple that I follow. Well, you and I could talk about this over a cup of coffee or tea, which you can drink instead of going to a bar.
I love to watch other people drink. I just can't go to the bar. That's right. All right, Jacob Ward, veteran journalist. You'll find him at by Jacob Ward in all the social networks. The book is The Loop: How AI Is Creating a World Without Choices and How to Fight Back. Jacob, thanks very much for sharing your research, your thinking. A vibrant conversation. Thanks so much, Tony. I really appreciate you having me. And a shameless plug: I run a newsletter called The Rip Current, at theripcurrent.com, if anyone wants to hear me rant like this on a regular basis. Thank you so much, Tony, for this time. I really appreciate you. My pleasure. Thank you. Next week: that's a very good question. What's certain is that it'll be our last show of 2025. Talk about a lackluster host; he can't even come up with a show for the following week. It's unbelievable. Horrible. If you missed any part of this week's show, I beseech you, find it at tonymartignetti.com. Tragic, actually, is what it is. Tragic, tragic. But your intro today was nice, and I heard the echo, because without the mic you don't have the, how is it called, the sound. You know how a really good mic takes away all the extra sound out here? Yeah, the ambient noise. Yes. Well, when you went to do your intro today, I heard all of it. Right. So what does that have to do with not knowing what show is going to be on next week? I don't know. I had a point. So, we've got to get you on a plane. That's the problem. You've never done air travel. You're confused. All right. Our creative producer is Claire Meyerhoff. I'm your associate producer, Kate Martignetti. The show's social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.