
Nonprofit Radio for January 5, 2026: 2026 Outlook


Amy Sample Ward & Gene Takagi: 2026 Outlook

Our Esteemed Contributors kick off 2026 by sharing what they’re looking out for in the New Year. We talk about increased hesitation around AI adoption; mitigating the risks of political, legal and PR attacks; your board’s role in protecting your nonprofit; increased collaborations between nonprofits; data protection; overcoming fears; and a lot more. They’re Amy Sample Ward, our tech contributor and CEO of NTEN, and Gene Takagi, our legal contributor and principal attorney at NEO, the Nonprofit and Exempt Organizations Law Group.

Gene Takagi


Listen to the podcast

Get Nonprofit Radio insider alerts


We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners

Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.

Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.
View Full Transcript

And welcome to Tony Martignetti Nonprofit Radio. Big nonprofit ideas for the other 95%. I’m your aptly named host and the pod father of your favorite hebdomadal podcast. Happy New Year. We’ll have more to say about this coming up. Yes, excitement for next year. No, what am I saying? Excitement for this year. Well, it was next year when we were recording, but it’s this year now. It’s this year, this year. Happy New Year for this year. And I’m glad you’re with us. You’d get slapped with a diagnosis of neoenophobia if you feared our New Year show. Here’s our not new, well-seasoned associate producer, Kate, with what’s up this week. Hey Tony, happy New Year. Thank you. Would you call me well seasoned? Yeah. Got me. Here’s what’s up. 2026 outlook. Our esteemed contributors kick off 2026 to share what they’re looking out for in the new year. We talk about increased hesitation around AI adoption; mitigating the risks of political, legal, and PR attacks; increased collaborations between nonprofits; data protection; overcoming fears; and a lot more. They are Amy Sample Ward, our tech contributor and CEO of NTEN, and Gene Takagi, our legal contributor and principal attorney at NEO, the Nonprofit and Exempt Organizations Law Group. On Tony’s Take 2: I’m excited for 2026. Here is 2026 Outlook. It’s a genuine pleasure to welcome and say Happy New Year to our two esteemed contributors to Nonprofit Radio. Amy Sample Ward is our technology contributor and the CEO of NTEN. They were awarded a 2023 Bosch Foundation fellowship, and their most recent co-authored book is The Tech That Comes Next, about equity and inclusiveness in technology development. You’ll find them on Bluesky as Amy Sample Ward. Gene Takagi is our legal contributor and principal of NEO, the Nonprofit and Exempt Organizations Law Group in San Francisco. He edits the wildly popular nonprofitlawblog.com.
The firm is at neolawgroup.com and he’s @GTak, as he has been for many, many years. Happy New Year. Welcome, Amy. Welcome, Gene. Happy New Year. Happy New Year. I really appreciate the level of enthusiasm that you bring, Tony, and I will feed off of it to have a smile on my face as we have this, what is likely a very intense conversation. Intense, yes, but valuable. And valuable, not but: intense and valuable and informative. All right, I’m happy to spread enthusiasm. I’m glad it’s infectious. All right. Um, so we’re talking about the outlooks. You know, what are we anticipating, paying great attention to, in this new year 2026? Gene, let’s start with you. You’re concerned about risks to nonprofits in terms of our political… people. I almost said foes, but a lot are foes, not all are foes. Political folks, legal attacks. The community, the sector, we’re still, as we were in 2025, at risk, you believe. Yeah, amidst all the rainbows, lollipops, and roses that we should all celebrate, um, yeah, there are a few troubling aspects of 2025 that will probably linger through 2026 and beyond that we have to think about. So, you know, I think, just to start us off, it’s good to sort of take the risks into different buckets a little bit. There are the legal risks, of course, and those have long been in existence, and people and organizations can manage around that. But there are political risks now, and a lot of what is coming out from the federal administration, and those that support the federal administration, are politically driven risks that line up with the administration’s agenda. So there’s that bucket of risks. Then there’s the whole public relations risk, as we are kind of in this, um,
very polarized society, and there are people taking sides and various ways of attacking not just organizations but individuals within organizations. So you’ve got all of these areas of risks to think about. And, you know, I think the overall goal, where I’m hoping to continue to see those rainbows and lollipops and roses, is just to be calm and sort of just say, hey, we’re all mission-driven organizations. We know the game. It’s a changed playing field over the last year, but we still know what our goals are. We still know what our mission is. We still know what we need to do, and we’re going to keep trying to do it, because that’s what we always do. And just to keep our eyes open on that. Yes, the playing field has changed. We’ve got risks to think about, um, but let’s look to, you know, sources like Tony Martignetti Nonprofit Radio, so we can help manage those risks and stay calm and stay focused and on task. All right, so Amy, Gene is helping, you know, wants us to be grounded, like, you know, grounded in our mission. Yeah. Are you on board with that? Absolutely on board, um. You know, in December, I had the privilege and opportunity to be at a couple different gatherings, one focused on cybersecurity, one a regional gathering of nonprofits, so all across different mission areas. And two conversations from those spaces carry into our new year outlook here, and maybe this idea of being grounded. I think, when it feels like every single thing needs work and every single thing is hard, it’s so easy to be overwhelmed and be like, OK, well, it’s not even worth doing anything on security unless we can do everything. And that’s just not the case, right?
Being grounded in that same feeling of, as Gene said, like, we know what our work is, we know what our mission is, is bringing that same attitude to all of these pieces of technology and data and security that might be on your list, might be things you know you need to work on. Doing one of those things is better than doing none of those things. And just as much as you know your mission, and so you’re not going to get distracted by the politicization of every single one of our missions right now, you’re gonna stay focused. I think that’s the perfect framing for making these decisions around technology or security or data. You know your mission. And if your community, your constituents, your service recipients are not safe to receive those services from you, then you’re not able to meet your mission. So if you have to make a decision about, should we collect this data? Should we store it over here? Should we have a web form for this? You can go back to that frame and say, would this form make our community members unsafe? If so, how do we get rid of the form, right? OK, our funder requires that we report the number of people in our services that are in the county, because the county is our funder, right? Great. You can collect whether someone is in the county. You don’t have to store that check box or their home address in their profile in your database, right? That data can live in different places, and that protects those community members if you ever had a subpoena or a request for that data, right? So if you can come back to this: we do know our mission, we are not being distracted from it, and we are going to keep our people safe. It really is, I think, a strong, practical frame for every single staff person, not just someone on a technology team, to make a choice about technology systems or, you know, how you’re implementing data. That’s one bit of grounding I wanted to offer.
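Amy's county example boils down to data minimization: derive the yes/no answer the funder needs at intake, and deliberately never store the identifying detail it came from. A minimal sketch of that idea, purely illustrative and not from the episode (the function name, field names, and ZIP code list are all invented for the example):

```python
# Hypothetical sketch of the data-minimization practice described above:
# record only whether a person is in the funder's county, not their address.

COUNTY_ZIPS = {"97201", "97202", "97203"}  # invented example ZIP codes

def minimized_intake(name: str, zip_code: str) -> dict:
    """Return the record we actually store: a derived yes/no flag, no address."""
    return {
        "name": name,
        "in_county": zip_code in COUNTY_ZIPS,  # derived at intake...
        # ...while the ZIP/address is deliberately NOT kept in the record
    }

record = minimized_intake("J. Doe", "97202")
print(record)  # {'name': 'J. Doe', 'in_county': True}
```

The point is that a subpoena or breach can only expose what was stored; the funder's count of in-county participants is still reportable, but no home address ever entered the database.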
OK, let me, before you go to your next, that is very consistent with the last time you were on, which was like, uh, I don’t know, September or October of last year. And, you know, you take one step at a time. It’s better to do one step than to do no step. It’s better to take one thing than to ignore all three because they all seem so big. And what we’re really adding, and I think this is also consistent with the last time the three of us were together, you know, it’s not only our mission that grounds us, but now we’re more focused than we were in 2024 on protecting those on our team, those we’re helping, those getting our services. I mean, it’s not that we ignored that in the past, but it’s a greater area of focus now, you know, protection around data, technology, security. Protection of those folks is just more of a focus. So, you know, it’s like you have this through line. You’re consistent. Yes, totally. I appreciate you naming that. The other piece I wanted to carry forward from these recent conversations I had at these conferences: folks were, you know, I may be talking about data privacy and how vulnerable data is in our organizations. And folks said, OK, well, so what’s our, what, how do we change our policies for these next 3 years? And I just do not think that is the approach I would ever recommend. I think it is, how do you improve your policies forever? Why would you ever want to say that your data is knowingly more vulnerable, right? Um, and I also think that you’re going to have far less staff fidelity to your policies if you present them as a temporarily changed policy, and then we’re gonna change them back. We’re like, how am I going to remember? I’m just going to do the one I know, right? And then in 2029 we’re going to become, what, less secure?
We’re going to go back to our less secure policies, right? Because this is, I really think, not a matter of who is in office, at any level of office, or anything. In a beautiful, equitable world that I know we’re all here to build, I would want my data completely under my control, no matter where it is, right? So why would I say, well, until we get there, let’s go ahead and have these really bad data policies, right? No, let’s build those policies now and manage them, train staff around them, and build the confidence in our constituents that if you share data with us, oh my gosh, like, we are protecting it at all costs. We are making sure that program history or service recipient, you know, access is anonymized away from your address or your demographics, right? Like, we’re really protecting you if your data is with us. That should always be trust that you want to be building, right? So as people are maybe building some of these data retention or, um, data cleaning policies for the first time, or updating them because of these more urgent priorities, really don’t think of it as a, we’re just gonna do this right now. This is building your policies towards the world you want them to be, and as strong and safe as they can be. Yeah, 100% on what Amy is saying, and I’ll just add that, you know, I neglected to talk about financial risks as well. And where it comes to Amy’s point is: the stuff that you need to do requires an investment, and this is at a time when a lot of nonprofits are very resource constrained. A lot of grants have been pulled. A lot of funders are deciding to get more conservative at this point, where we hope that they would actually step up. But not everybody is. There are some notable exceptions out there, uh, but, um, we need more funders and we need more advocacy for those funders as well.
But in light of those resource, you know, sort of constraints, nonprofits have to make really difficult decisions of saying, and I’ll just put it bluntly, we may not be able to give the same level of service and support to our beneficiaries now, but we need to do that so we can protect them in the future. We need to protect our mission. We need to protect our team so we don’t lose everybody. We need to protect their safety and make them feel comfortable. We need to protect the whole infrastructure and support system so we just don’t vanish, which would hurt our beneficiaries much more. But those are difficult decisions, because that may mean pulling back on some of the services you can give, or not expanding, or even contracting who you can help. So really tough decisions. I don’t want to make light of that at all, um, but these decisions have to be made. Otherwise there are some really, really, sort of, um, bad places that the organization can go, to the detriment of the mission, which is why the organization exists in the first place. I want to build on something that Gene’s talking about here, and connect back to, I think, previous times when the three of us have talked together, you know, also talked about the board’s role in all of this. And I think this financial piece Gene’s bringing up often gets to be the front and center piece of board conversations, right? The finance, our fiscal, uh, fiduciary duty and, you know, making sure that the organization is financially stable or able to move forward. But in that same way, I think our duty of care as board members requires, not that board members are taking action and logging into this database. Like, I don’t think board members need access to your technology systems, but board members should absolutely be asking, what is our data retention policy?
I want to be sure I understand it as a board member, and I understand the risk to our constituents based on what our data retention policy is. And I want to know that staff are implementing it and deleting records when they should be deleting them, right? I want to know that we have a staff cybersecurity plan, that staff know what to do if you showed up to work and nobody could log into their email because your account had been taken over, right? Would staff know what to do? Again, board members don’t need to be doing anything in it. They don’t need to be deciding all these things. But if you are a board member, or if you’re a staff person who staffs the board, these might feel like technology conversations, but the board should know them and should be able to say with confidence they know what’s happening with data, with security, in the organization’s systems, just the same way that maybe they really ask hard questions around financials. Financials or human resources, you know, do we have a non-discrimination policy, you know, how are we protecting, uh, preserving people’s workplace, you know, equity, etc. Yeah, yeah, it’s good. Thank you for reminding us of the board’s role. Um, Gene, you talk to a lot of boards, Gene. What are you hearing from those key volunteers? Well, I’m hearing, um, you know, quite a bit of concern, in some cases, you know, fear, about whether their organizations need to scrub their websites, whether they need to change their programs, whether they need to stand up and double down on their messaging, whether they can include things like DEI or abortion in their name or mission, whether they need to change how they report things on their Form 990s. Um, so all sorts of things that boards are considering right now, and just to add on Amy’s point, um.
All of those decisions are so important for the board. But getting back to the financial piece, what boards can do is say, yes, you know, the financial sort of, um, governance is part of our job, but we also need to think about how the financials are going to support this other stuff that we do. It’s not just about supporting our beneficiaries for right now. It’s about, are we setting up systems to protect our beneficiaries from things they may not even be thinking about, like all of their privacy data, like all of that? That takes an investment, to have a data retention plan and sort of implement it and enforce it. That takes some work, and that takes HR time. So just making sure that the board is down with it, that maybe you can’t quite do things just the same way you have been. And, to go back to Amy’s point, um, you’re going to do things moving forward in a stronger, healthier way, but maybe you’re going to have to do less of that while the money, you know, situation has contracted. So those are the tough decisions that a lot of organizations are facing, and some won’t make it. Some, you know, with the collapse in funding and the political and legal environment right now, there are some organizations that definitely won’t make it. But how does their mission go on? How do they, you know, take advantage of others to be able to continue to provide services for their beneficiaries even if the organization itself doesn’t exist? Those are things boards need to be thinking about if they are kind of in that zone of insolvency right now. They’ve got to really be thinking very strongly about protecting their beneficiaries and advancing their mission even beyond the organization. I have sort of a poignant story that happened to me last month. And we’re talking about boards, and Gene mentioned, well, you both mentioned equity, you know. Um, and I only told the story. Yeah, I can anonymize it.
I only told the story to one person. I told it to my wife. That’s it. But I think it’s instructive. And, you know, I didn’t tell anybody else because it’s not a story that, like, I’m trying to self-aggrandize or something. But it was a client. I was at a client, I was away in another state. I traveled to a client board meeting to present about planned giving, and I went to a terrific, uh, social the night before the board meeting, met all the board members informally over drinks and apps, you know, it was lovely. Um, I did some training that afternoon, again, the day before the board meeting, with the staff. It was fun. It was like 7 or 8 of us. It was fun and, you know, I try to have fun trainings. Um, and then the board meeting came the next day, 9 o’clock in the morning. And they went through a bunch of, uh, agenda items, votes, votes, all unanimous, all unanimous. You know, it was sort of pro forma votes. They approved, yeah, some of the things that, you know, approve this transfer for a scholarship, etc. And then came the, um, then came a bylaws vote. And they had a sentence in their bylaws that said that either the board or the, um, what’s it called, with the recruiting, the, yeah, no, the nominating committee, thank you, it was either the board or the nominating committee will make best efforts to ensure that the board is, uh, non-discriminatory, uh, no, is equitable, and, you know, they would make best efforts to have a, um, a diverse board, that’s the word, OK, have a diverse board in terms of, and then they mentioned a bunch of characteristics, you know, gender and, um, income and age and location and things like that.
And the vote was to eliminate that sentence, that single sentence, from their bylaws, because they believed it’s now contrary to federal law. I don’t know whether that’s true or not, but that was the explanation. And I’ll never forget seeing that sentence. I mean, the bylaws page was projected on the screen so everybody could see it. And there were two remote board members at the meeting, but, you know, the vast majority of the board was in person. But it’s up on the screen, and it’s highlighted in yellow and it’s struck through, strike-through font, you know, that sentence about ensuring diversity on the board. Highlighted in yellow and struck through. This was not pro forma, and it did have a fair number of comments. Only one board member spoke in opposition to the, uh, motion. Um, and I couldn’t see that person. They were one of the two that was, uh, virtual. So it passed. It passed unanimously. Even the board member who spoke in opposition, that person, well, they might not have voted. Either they didn’t vote or they voted for the motion. I hope they didn’t vote, but they didn’t vote against it, because there were no no votes. So it passed, as far as I could tell, unanimously. And I was just, I was struck as they were having this conversation. Like, my head was in my palms and my heart was pounding. And I was just thinking, you know, if this is what they’re gonna do, it’s going this way. And then it did happen. And I thought, you know, this is a horrific moment. If I don’t do something, if I don’t stand up, then I’m acquiescing in this vote. And, um, I got up and I just quietly walked over to the person who was my primary contact at the organization, and he was actually leading the meeting too, uh, as the staff person leading the meeting.
I mean, the board chair led the meeting, but he was the staff person sitting right next to them. And I just went over and I whispered, uh, you know, in light of this vote, I can’t work with you anymore, and I wish you the best, uh, for your planned giving program, but I can’t be with you anymore. And I shook his hand, and then I quietly exited. I didn’t say anything to the whole room. There’s no grand speech, no grandiose thing. I just whispered this to him. I mean, it was obvious. I was the one person in the room standing up now, after this vote had just passed. Um, and then I said the same thing to another person who I had worked closely with for the 5 or so months that we were working together. I said the same thing to that person, and then I just quietly walked out of the room. And I owed them money. I sent them a refund check for the balance of the retainer that they had paid me, that I hadn’t earned. You know, so I just, you know, you cannot be witness to this. I mean, uh, maybe I’m a hypocrite, but do I check every board’s, every nonprofit’s bylaws to look for an equity and diversity statement? No, I don’t do that. I don’t. But when it was right there, smacking me in the face, to vote, uh, so I couldn’t, you know, I just couldn’t continue, and that was it. So I don’t know, uh, what’s the value of the story? I think we have to take a stand, you know, make a stand. Again, maybe I’m a hypocrite, because I don’t check this for all the clients that I work with. I don’t. But when you’re smacked in the face with the elimination of the diversity initiative on the board, you know, I just think, I mean, that was too far. And so we all have our boundaries, we all have our lines.
This is not a prescription for anybody else, but if you feel that something is not right, I mean, you have to, in your quiet way or loud way, you do it any way you want, in your way. You have to object. Mhm. Yeah, I appreciate that story so much, Tony. I hope individual board members can kind of take that, uh, as an example. Uh, I’ll let you know that I’m not so put off with the sentence in the bylaws, which is a rare one to see. Um, very few organizations would have it. But, as you said, when you vote to eliminate it, there’s like one of two reasons. One is fear, and I think that was probably misplaced fear, because a statement of exercising good faith and best efforts to have a diverse board, there’s nothing illegal about that. It’s the rhetoric that’s coming out that’s scary, and media misrepresentation, maybe not intentional in some cases, but just summarizing more nuanced language that sometimes comes out of government agencies, or even the executive orders, in simplistic ways that make it sound like everything DEI related is illegal. But it’s not illegal. Illegal diversity, you and I talked about, maybe Amy, you were with us, illegal diversity, but that doesn’t make all diversity illegal, and certainly not that bylaws provision. But if it’s not fear, then it is throwing out perhaps what many believe to be a core value of the organization, and that is a reason for somebody who’s very, you know, tied to that value. And I appreciate, you know, that you are, Tony. Well, I hope some of the board, I mean, there was the one board member who spoke in opposition. You know, to me, uh, you know, I would resign that board. I would resign that seat.
Or educate that board to understand, if it was fear based, that it was misplaced fear, and get back in touch with their values, so that they’re values- and mission-driven. Not just, purely, like, the statement in our 990 mission statement controls everything that we do. It’s our values, and the fundamental value that I think every charitable nonprofit has is to preserve the dignity of the individuals that they benefit and that work for them, and, you know, preserve the dignity of everybody involved. We don’t just serve food in a trough and say, you know, this is the way we can maximize the amount of food that we can get out to people. That’s ridiculous. Dignity is at the core of every organization’s values, that I would believe in anyway, and if you’re throwing that out, you know, I understand why a board member particularly should walk away from that. It’s time for Tony’s Take 2. Thank you, Kate. I am indeed excited for 2026. First, it looks like we’re going to have a new sponsor coming shortly. Uh, it could still fall through. You never know, you know, like the signature is not on the agreement, but it looks very promising. We’ll leave it at that. If they don’t come through, the show is canceled. No, of course, we continue without sponsorship. I mean, uh, we’re grateful for sponsors, but without them, of course, the show continues. We haven’t had a sponsor all of 2025. The sponsorship there ended in like March or something. So February, March, so that was Donorbox. So, um, yeah, let’s see what happens. Looks promising. And I am publishing a book this year. September, September is gonna be the publication of my book. Here’s the title: Planned Giving Accelerated. Finally, someone wrote a cut through the shit, no nonsense, practical step by step guide to launch long-term legacy fundraising at your small to mid-size nonprofit. Simply, in one week, and you start with bequests.
The title may be longer than the book. That’s OK. You’re gonna be hearing more about this. I think that’s a pretty self-explanatory title, if you were able to stay with it. You know, if you got distracted, it’s easy to get distracted in the middle of the title. You might not have heard, been conscious of, or, you know, actively been listening to the entire title. Um, but you should have gleaned out of that, like, the title is so long it needs a takeaway. Uh, the takeaway from the title is that it’s about launching planned giving at small and mid-size nonprofits. There you go. And I will, of course, be talking more about it. Again, September is the publication date. Uh, I’ll have some, um, early release info for listeners, of course, um, discounts on, uh, advance sales and stuff like that. So you’ll be hearing about this through the year. So yeah, I’m excited for 2026: a potential new sponsor and a definite new planned giving book. And that’s Tony’s Take 2. Oh, and Happy New Year again. We can’t say it enough times, because, uh, you know, this will offset all the holidays that I forget about until the following week. I forget, the associate producer doesn’t remind me, and, uh, they go unnamed until the following week, which is bad. So, multiple Happy New Year’s as, uh, offsetting to the late holiday announcements that will come, undoubtedly, throughout the year. Happy New Year. That is Tony’s Take 2. Kate? Happy New Year to you too, Uncle Tony, but also congratulations on your new book, or about-to-be new book. About to be, 9 months, but it’s coming. Yeah, thank you. It’s on its way. Thank you very much. Uh, and it was very good to see you over Christmas, you and the family in New Jersey. That was great fun, great fun, for several days.
Because I’m not a new associate producer, am I safe to assume that I will get a signed copy of this book? Uh, with your payment, yeah, absolutely. If I buy the book, you’ll sign it? Of course I will. Yeah. It’ll be available for you as, uh, it will for, uh, millions of others, uh, on Amazon and Barnes and Noble and wherever fine books are sold. OK, guys, I will be auctioning off a signed book by Tony Martignetti on my Facebook. What an exploitative capitalist. You’re gonna, I don’t know how much more that’s gonna be worth. Uh, it might actually detract from the value. Oh, because it’s tampered with, having my, it’s tampered, right? It’s defaced. It’s defoliated. It’s spoiled. It’s jaded. It’s cashed. It’s spent. Those are all good words. I don’t know if they all quite fit the meaning, but it’s all close enough. All right. We’ve got butt loads more time. Here’s the rest of 2026 Outlook with Amy Sample Ward and Gene Takagi. I really appreciate, well, Tony, you sharing this story and taking action. I do want to absolve you of any obligatory guilt, as you’ve named, you know, well, I don’t go and check all these things. There’s no way in our human capacity. We only have so many Beyoncé hours in the day, right? So, like, it is not reasonable to expect that you’ve had access to, or the time to find, information on every single organization that maybe you give advice to, because you also just give advice to people even if they weren’t a paid client, you know. And so that’s not a reasonable expectation for any of us. And as you said, when the opportunity to stand by your morals and values came up, you did stand by them, right? It’s not, OK, well, I guess you should have, that’s disqualified, until you go back in your history and you double check every client you’ve ever had. That’s not reasonable, right? When the opportunity was there, you took action.
And I appreciate, Gene, you bringing up, uh, values and helping folks think about that, while also in that same sentence talking about fear, because we, we know from both a nonprofit, like, marketing and advocacy perspective, but also a political advocacy perspective, fear is so influential, because people become immobilized and irrational when they have fear, right? And that’s why fear is the operating model of the last 12 months, because it, it’s so much easier to influence scared people, um, than it is thoughtful, powerful, calm people. Right? And so, if we can use Tony’s story to say that maybe there’s a conversation in your organization, whether it’s with your board or with your, your staff or both, and maybe, and hopefully it’s not the same as Tony’s story, and you’re thinking about, you know, eliminating a sentence like that or, or doing something similar. Whatever it is, I think part of operating differently right now, as it has been in 2025, but will continue in 2026, is not believing that anything is so urgent, you have to operate in fear. That you can take the 30 seconds to walk away from your machine. To take a deep breath and to say, OK, what, what’s actually important as I deal with this potential phishing attack, or deal with this funder who’s just sent us another decline, or, you know, whatever type of fear-inducing scary message you’re getting or, or conversation you’re about to have. There is no reason you can’t take 10 seconds for that breath to ensure that you’re not operating in fear, because you’re just, you’re not going to serve your mission, you’re not going to serve yourself, you’re not going to serve your community, especially if all of us are operating in fear. The more of us who can take that breath before we make a choice, or take a vote, or make a proposal, that’s going to add up to a lot more calm, confident choices than irrational, scared choices, you know.
And, and a big part of that is why the, that’s a big reason why the community needs to stand together. Because that will help, that helps reduce fear. But we know that we’re not alone, we’re not isolated. You know, uh, that everybody’s taking a breath before we come back to this decision. Yeah, and regardless of what your mission is, I mean, I wouldn’t care if it was, I’m not a big supporter of guns, but, but I wouldn’t care if it was the National Rifle Association. I would stand up for their right to exist as much as I did for the Corporation for Public Broadcasting, um, and, and Plan, and Planned Parenthood. Yeah, you know, I don’t care what the mission is. It’s, it’s, it’s, it’s the right to exist and, and the community is so much stronger when it is united, united. And, you know, I think we saw that in the, um, The whole GoFundMe chaos week in, it was October. Uh, the community came together, this beautiful diverse community came together and said, this is too far. You know, this is corporate greed. It’s, it’s overreaching. We don’t know where our data, who gets the data. We, we, we don’t like the fees. And, uh, uh, you know, it played for me, largely, and a lot of people played out on LinkedIn. That’s where I was posting about it and others were as well. And, and, and our big diverse community came together and spoke with one voice. And. They removed the pages or they de-accession the, you know, whether they, they did, they deactivated the pages, they didn’t remove them. The, the 1.4 million that GoFundMe created. So, you know, there was an, uh, an instance of the community coming together in a united voice saying this is wrong. And I would say it took too long for GoFundMe to react, but I wouldn’t say it was too little, too late. It was just too late. They actually did what we asked. They just did it several days later, but they did come around and, and I think that was a, that was a. Uh, a, a big, a big win. 
Uh, it was, it was something that the community should, should celebrate, that we came together around that. Mhm. Yeah. And carry that energy through the rest of the year, you know. I really, I think from the start of 2025 to the end of 2025, there was such a notable and noticeable, and to me, very welcome, even if late, uh, shift in the sector of saying, oh, we don’t maybe just want to click accept on every single one of these AI tools. Oh, maybe we don’t just want to enable every one of these AI products to suck up all of our data. Like, people actually really came around this year from, oh my gosh, adopt as quickly as possible, which had been carried over from 2024, into, hey, I want to read these terms before we actually use this tool. And I, I hope like that same momentum of if it’s, if it’s bad for you, it’s bad for all of us around GoFundMe, carries forward into folks collectively across the sector saying, hey, unless our data is actually private, which I can tell you, like, it’s not, we’re not using the tool, and, and we use our voice, not just locally, but also through not having those accounts, right? Our adoption and participation in technology is actually more influential than any money we put at it, because as a sector, there’s also lots of programs to give us free access, whatever, right? And so just not adopting some of these tools until they can meet the needs, like we said at the very beginning of this conversation, of keeping our mission and our constituents safe and that data safe. I think we, we have nothing to lose in, in saying we demand that before we’re going to use these products. And Amy, clearly artificial intelligence is something to keep an eye on in 2026. It’s, it’s, it’s only accelerating. Uh, are you, are you seeing, so you’re, you’re, you’re seeing greater skepticism now than, than you were earlier in 2025, early 2025-ish? Yeah, for sure, more skepticism and also.
You know, not that it is or is not skepticism, but it’s just a separate kind of space where organizations are saying, you know, I, I want to know what’s going to happen before we do this. And I think that it isn’t worth adopting at all costs. Like, if we have a bunch of data on folks receiving, you know, refugee status services and entry, it’s not worth the risk to say that our staff are going to use some AI tool to help them write messages to those people, right? Like, it’s not worth it. And how, and, and again, it’s not because I think there’s a lot of folks across the sector who would identify as a, as a quote unquote AI expert, and I don’t, I don’t believe there are any, but it’s not because people feel they have expertise at the technical level. I think it’s because there’s been enough education and kind of those calm, let’s take a breath and think about this moments where across the sector, folks are saying, huh, I don’t think this is serving us. And I think there’s something in this that is gonna, we’re gonna have to be accountable, and these platforms are not, and that’s giving me some pause, you know, back to, to board and risk and all of those pieces that Gene’s already outlined. AI is really, I think, prompting folks to say, we just need to have a little bit more information, because this is on us if we, if we put the data in there. That’s very gratifying to hear. I, I think that’s enormously healthy. Yeah. And, and, and it goes to what Gene was saying, you know, grounded in the mission. I mean, let’s take a pause and let’s make sure that we are focusing on what is important to us, and that includes evaluating whether this not-so-new, but, but very shiny object really, uh, suits us. Or, or, or does it not? And it’s decision making again from the leaders, not necessarily the board here, but from the leadership of the organization, of this AI tool can make things a lot more convenient.
You can do a lot with this AI tool, but on the other hand, many of the leaders of the biggest AI companies now, and, like, even the former head of NASDAQ, have said AI is an existential threat to human existence. I didn’t, I didn’t hear that. OK, that has got to weigh in to say, well, maybe we need some guardrails here, and if the nonprofit sector is not identifying those, the for-profit sector is probably not going to do that. So like this is like really key for organizations to start to think about convenience on one hand, existential threat on the other hand. Where do we stand on our values here? You know, how are our beneficiaries going to ultimately be helped by saving a few minutes each day by using AI tools where we’re just checking the box, agreeing to give up all of our privacy rights to it, and not even knowing what we’re doing. Um, so really important just to weave together some of the earlier points with this, and not as a super scary alarm bell, but I do think that a number of organizations don’t realize that the amount of your data, content writ large, is in systems held on servers owned by companies that don’t even have to disclose to you that they got a subpoena to turn over your data. And then you think about AI facilitating that extraction automatically and rapidly, you know, as far as us thinking about keeping our people safe and really protecting that data. It’s, it is kind of like a web and not just, you know, a, a circle that you’re, that you’re operating in, because again, you’re thinking about that data in your database, but is your database stored through a vendor that, again, they have a policy, the government could just send the subpoena to them, and they don’t even have to disclose to you that they’re sharing the data out of it, right?
So, really, if we’re doing resolutions, 2026 is maybe like truly read the user agreements on the tools that you have, so that you know to what degree you even can contain the data or the impact or, or where it’s going. As we’re recording this as well, you know, news came out from the federal administration that there’s a plan to require tourists to give up their rights to their social media posts, to have them reviewed on entry for 5 years, and also have to give up all of the names and contact information of their relatives, including their children, as a condition to be able to enter into the country. Damn, I didn’t hear. I didn’t, I heard about the social media. I didn’t hear about the “you have to give up all your relatives” part of the same, to come visit here. So this is still planned. This is not, so it could be rhetoric, and you know, the tourism industry, certainly as the World Cup is coming, is certainly going to be alarmed by this. So maybe this is a case where the for-profit sector will push back on it, but can you think about how AI would be integrated in with such an order and what a nightmare scenario this would be? I don’t know if anybody’s watching Pluribus. Um, there’s this television series on right now where, um, it speaks a little bit to kind of like ultimate threats of where this could lead to, um, and again, not to, not to be an alarmist too much, um, but there are lessons to be learned, uh, from extreme situations and say, well, let’s not go down that road, let’s go down this more beautiful road instead. Damn, but that, uh, uh, it seems, it, it seems not infeasible, but, uh, it seems on such a scale, like we have, we have, well, that’s why you have to use AI to do it, and yeah, uh, we have tens of millions of visitors to this country, maybe it’s 100, I don’t know, tens of millions of visitors to this country each year. All those, all their relatives that, that they give, it’s encompassing everybody in the world.
Tens of millions of people times we all have like 3 or 4 family members at least. Uh, we all have parents. Uh, I don’t know. Uh, OK. It seems like at a scale, it’s just, I don’t know, it sounds like Stephen Miller didn’t think through. That, uh, that initiative. I had heard about the social media part. I didn’t know about the relatives, giving up the relatives to come, to come visit for a, uh, uh, uh, go to a, A week in the Pacific Northwest with your family. See, I picked Oregon. I picked the Pacific Northwest. I didn’t say no, you go to New York City for a week. I didn’t even say go to San, I didn’t even say go to San Francisco for a week. Both of which are fine destinations, but I chose the Pacific Northwest. All right. Can I bring us back to data as I always do. Um, but something that I, I know Gene spoke of very briefly earlier, and I think is maybe prepared to speak on more, um, and that we did talk a little bit about when we, when the three of us previously met a couple of months ago, but Part of this contraction in the, in the sector, organizations losing funding or, or feeling unable to operate or being attacked and, and closing, you know, there’s all different reasons, but there are continuing to be organizations that, that close. And a piece of that from N10 side of things where we’re looking at the technology and the data is what it means for Program effectiveness, to know that these models did work, even though that organization is closed now. To know what data that organization had been seen in that county where they provided those services. And now that data is gone to, to again, say what was the level of need in our county before, right? And that’s not to say that I think You know, everybody should just sell or turn over or give away a bunch of data about people. 
But I do think that as organizations are thinking about dissolving or closing, or kind of letting go of their own independent organization and becoming a program of another organization or something, that what to do with, with the data, not just constituent data, but like, your impact data, the proof that your programs were effective, all of those different pieces, I hope can be part of those planning scenarios, so that we could say, OK, well, we’re closing and we work on housing in Clackamas County. There’s another housing organization. Again, we’re not turning over people’s names and addresses, but could we at least transition data that shows these types of programs have these types of effectiveness, so that they continue the work, and, and that we don’t lose the knowledge that is ours as the community members in that county. That is our knowledge. That’s our data. And if an organization closes and deletes it, right, then it’s gone. And, and really thinking about that as a public good, that there can be places where we continue to hold that story of your work, but also the kind of impact evaluation program data, anonymized and whatever, but I’m, I’m really seeing already the impacts of losing that information in communities. Yeah, that’s terrible. Uh, I mean, what a loss of, uh, of institutional knowledge and community. Yeah, you’re right. The data is of and for the community, right? How would we as community members advocate if we can’t point to the data that said these were services we used, you know. That’s where the boards need to come in as well. So if you let your sort of organization operate till its very last dollar and there’s nothing left to sort of create that transition, because there is a cost to this. You need people to help you sort the data, get it over, move it, transfer it, protect the private information. Like this all takes time. You can’t do this when you’re on your last week of operating funds, right?
You have to make sure you’re paying your staff. You can’t tell them at the end of the day, oh, we ran out of money so we can’t pay you. And there are all sorts of additional problems that would happen there too, but organizations at this time, there’s so many that are kind of in this zone of insolvency right now that hard decisions need to be made. And so that’s kind of just another big thing. The other thing I wanted to get across really quick, Tony, is just because we started with risk mitigation. I think I threw us down different rabbit holes. But do the easy stuff. Make sure your filings are in on time, like document your board minutes, you know, um, your meeting minutes, um, make sure that you have them. If you’re doing stuff that you’re not quite sure could lead into issues, explain before anybody has audited you why you’re doing something for charitable purposes, why you’re doing this for your mission, so somebody’s not later accusing you of, yeah, you’re funding this illegal sort of demonstration and you’ve intentionally funded trespassing and all of these other violations. You know, if you have it in your file that goes, this is what we were funding, you know, a peaceful protest, that’s, you know, if you have those in your books, audits and things go by so much easier, and it’s not a regulator going, you just made that up because we asked for it. No, it was already in your file. So just a quick few steps on risk mitigation. Yeah, that, that brings together all the, I mean, all the areas you talked about, uh, at the opening, Gene, you know, political, legal, financial, public relations. And they all, they all spin out of one, one of the four can lead to all four being, a, a crisis at the same time. You know, something political has legal implications, which gives you bad press and your donations stop, and there’s, there’s all 4, you know, all 4 implicated. Um, do you wanna, do you wanna talk, we have, we have some more time, Gene.
Do you wanna, you wanna talk more about these, uh, you know, uh, collaborations on the positive end. Acquisitions maybe on the, on the other, on the opposite end, and maybe mergers in the middle. You know, and, well, closing, closings would be at the, the, the closing, like to your last dollar. That’s the, that’s the bad end of the spectrum, and then, and then it continues from there. I’m sorry, Amy, please. And I just wanted to add to your kind of spectrum of scenarios that Gene’s maybe offering some insight to. At N10, we continue to get phone calls from folks who aren’t necessarily identifying one of those scenarios. They don’t even know what scenario they would look at. And they’re, they’re calling because they’re like, we’ve been put on a list. What are we supposed to do? Like we didn’t, we didn’t do anything to get put on this list, but now we’re on this public list. And I’ve of course given them the technology side advice, um, but I’m curious, again, yes, those scenarios Tony outlined, but also people that don’t even know if they’re facing one of those scenarios, because they’re really just coming to this as, well, now our organization is being targeted. We, we’ve been put on a list. What are we supposed to do? Yeah, it’s, it’s great. It’s a great question, right? And there are all sorts of lists that are out there, but ultimately, you know, at the end of the day, there are probably only a few hundred organizations out of 1.9 million that are on some list that has got the attention of someone in power, so. Um, there are a few organizations out there that, that have the ear of, of congressional members, and, and they’re scary. It’s scary to be on those lists or on lists of, you know, letters coming from Representative Hawley or, or, you know, some senator’s office. Like those lists are scary, um, but right now they’re mostly sort of “will you give us this information” request letters.
They’re not even, like, “you’ve done something wrong, we’re going to get you” letters, like the rhetoric that sometimes comes out of President Trump’s mouth about, like, you know, we’re going to go after these organizations because they’ve done something illegal. There are actually laws, and there’s a whole bunch of bureaucracy, to be able to take, you know, a 501c3 status away. And taking 501c3 status away does not freeze your funding or prevent you from operating. Um, so there are a lot of misconceptions out there. The freezing the funds and stopping you from operating are largely state level, you know, actions. Now there are a lot of states out there that may not be friendly if the federal government is pulling your 501c3 status away. But there are many other states where you probably won’t expect the same type of repercussions if it was a political, it’s clearly a political reason why they tried to take your 501c3 status away. And when I was speaking about the bureaucracy, again, oftentimes when we speak about federal bureaucracy or the IRS, it’s like, such a headache. It takes forever to get anywhere. Now consider this with a huge loss of staff members, and a lot of expert staff members, and some portion of the remaining members who are a little bit resistant to what the federal priorities are, and think about how dysfunctional that may end up being. So if you’re on a list, what is ultimately going to be the, you know, the outcome of that list? For most organizations, nothing, right? For most organizations it goes nowhere. They’re going to try to scare you, they may ask for documents. You may not give them to them. Will they follow up on it? Sometimes they will, but they don’t have a lot of staff. Um, so like if you gave them 1000 pages of documents, or even 100 pages of documents, the likelihood anybody reads them is like really small. What about their algorithms and stuff?
Their systems are super antiquated, which is, again, reasons for criticism in the past, but like, there’s some good side to that right now as well. If you’re, so, um, one thing is: we can’t live in complete fear. Some organizations may get targeted and may go down, but it’s not the vast majority of organizations that are worried about it that are actually going to go down. The ones that they’re going to target, probably a few big ones that have the wealth to fight back, and then just some random small ones. But the easiest ones to pick off are the ones that have low hanging fruit, and by that I mean they forgot to register in time, so Texas decided to take away their right to operate there. Because of that, all of the crowdfunding platforms say, oh, you’ve been taken off, from this state, because you did not comply with their laws. You can’t use our platform in multi-state situations anymore. Other states could follow as well, so like, don’t give them low hanging fruit like late filing. They’re probably not going to get you because you said DEI on your website, or that’s part of your mission, or abortion is in your mission. All of those things are legal, right? So what is illegal DEI? Well, employment discrimination is illegal DEI, so you can’t say I’m just going to hire a Black person, or I’m just going to hire a white person, or an Asian person, or whatever it be. You can’t say that. There are some private actions that we talked about before, about making and enforcing contracts, the Fearless Fund situation, Tony, if you remember, kind of private funding, and I think Ed Blum has gone after someone else now. Ed Blum funded the Harvard/UNC Supreme Court affirmative action litigation that ended affirmative action in higher education admissions. Yes, he is continuing to fund organizations that fund plaintiffs. They look for plaintiffs to sue.
Not just nonprofits but for-profits, or anybody that has any sort of affirmative action type program that uses a contract, so not using contractual language in those type of situations can really help, uh, but they don’t have the resources to sue everybody. So again, like if there are 100,000 organizations that have these types of programs, yeah, maybe 5, maybe 10 get targeted. So are you going to stop pursuing your mission because of a 0.001 chance that, like, Ed Blum’s gonna get a hold of it? Probably not, and, but you can take steps to avoid the risks. So, um, just sort of be on the lookout. There are a lot of resources out there, and this is why collaborating with people and saying, hey, what good resources do you have out there? Like those things are really good, just to say, oh, the National Council of Nonprofits, they’ve got some good resources, the Alliance for Justice, they got some good resources. There are, if you Google nonprofit legal defense, you’ll find a bunch of good resources, and TED, fantastic resource, right? So there’s a lot of organizations out there that can help collaborate so you don’t feel like you’re by yourself trying to find every resource by yourself. Amy, I’ll give you the, give you all the, the final word, because Gene, Gene opened us. So yeah, I just wanted to build on what Gene said and reinforce, you know, going to your state nonprofit association or the National Council, or, you know, call the Bolder Advocacy support line, or, you know, N10 groups. But one piece that I think you’ll find no matter where you turn, is that folks will say, you’re not alone. Hey, we have a whole community of folks just like you. And to your point at the beginning, Tony, and, and Gene too, like, all of us are probably scared, and if you’re not scared, you’re not paying attention or whatever, you’re right. But that we don’t need to be alone in figuring anything out. We are stronger, we’re calmer, we’re better, everything together.
So don’t feel like, OK, I’m gonna find a resource and then I’m gonna keep it. If you find a resource, make sure you turn around and tell somebody else, right? So that we are constantly helping in a networked community way, because that’s how the most number of our organizations will survive. That’s how the most number of our communities will continue to have access to our services. Like, no organization is alone in, in anything that you’re facing, and none of us can help you face it if you are, if you think you are alone, right? That’s Amy Sample Ward, our technology contributor, CEO of N10. With them is Gene Takagi, our legal contributor and the principal of NEO, the Nonprofit and Exempt Organizations Law Group. Thank you very much, Amy. Thank you very much, Gene. Happy New Year. Happy 2026. Next week, be human and be yourself for best fundraising. If you missed any part of this week’s show, I beseech you, find it at tonymartignetti.com. Happy New Year again. Our creative producer is Claire Meyerhoff. I’m your associate producer Kate Martignetti. The show’s social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with us next week for nonprofit radio. Big nonprofit ideas for the other 95%. Go out and be great.

Nonprofit Radio for October 6, 2025: Your AI Brand Footprint

 

George Weiner: Your AI Brand Footprint

What is this thing, why should you care and what can you do to improve it? George Weiner returns to acquaint you with his company’s study of how Artificial Intelligence will influence giving in Q4. Then he explains the implications of the research, including that last year’s content strategy is obsolete. He also brings tactics for you and your content to get the recognition you deserve from Google Gemini, ChatGPT and their colleagues. George is Chief Whaler at Whole Whale.

 


View Full Transcript

Welcome to Tony Martignetti Nonprofit Radio, big nonprofit ideas for the other 95%. I’m your aptly named host, and I’m the podfather of your favorite hebdominal podcast. Oh, I’m glad you’re with us. I’d suffer the embarrassment of Onicotrophia if you nailed me with the idea that you missed this week’s show. Here’s our associate producer Kate, to give you the highlights. Hey Tony, here’s what’s up. Your AI brand footprint. What is this thing? Why should you care? And what can you do to improve it? George Weiner returns to acquaint you with his company’s study of how artificial intelligence will influence giving in Q4. Then, he explains the implications of the research, including that last year’s content strategy is obsolete. He also brings tactics for you and your content to get the recognition you deserve from Google Gemini, ChatGPT and their colleagues. George is chief whaler at Whole Whale. On Tony’s take 2: Hails from the gym. If she can do it. Here is your AI brand footprint. It’s a pleasure to welcome back George Weiner. In 2010, he founded Whole Whale, a top 100 nonprofit-focused digital agency supporting analytics, advertising, AI capacity, and digital fundraising. George is chief whaler. He’s also the co-founder of Power Poetry, the largest teen poetry platform in the US, a safe, creative, free home to over 1 million poets, and CTOs for Good, a group of tech leaders at nonprofits that delivers social impact primarily through technology and digital strategy. You’ll find Whole Whale at wholewhale.com. You’ll find George on LinkedIn, where he is very active. Welcome back to Nonprofit Radio, George Weiner. I’m, uh, I feel like we haven’t learned a lesson. You keep having me on. I’m honored every time I get the invite. I was like, wow, I didn’t mess this up. Thank you. Yes, no, you, you’re, you’re, you’ve earned a repeat, repeat appearances, absolutely.
Um, you know, I, so I was happy to, uh, read the bio that you provided, but I don’t, I don’t think it captures. I don’t, I don’t, it’s not the bio that I would write if I were you, because, you know, you have this enormous tech background that you do go on, the bio does go on, which I did not mention, that you were chief technology officer, I believe it was for 7 years, at dosomething.org, which is enormous, enormous, turned into the enormous data capturing and, uh, uh, assistance for activating young folks. That’s enormous, but I would, so not that, that, that is to be minimized, but I still don’t feel like this all captures. I mean, you’re, you’re the, you’re a tech guy who understands it, explains it, uh, simply. Talk about, like, I mean, you, you get in the weeds of tech. I mean, you’re like a coder. You write, you write lines of code. I do, I have, I’ve come in and out of it, interestingly. I used to be very, actually, like in the technical, writing code, and then I hired people smarter than me to write much better code, and then I came in and out, a lot of data, analytics, advertising. I love learning, I love understanding and then helping nonprofits find the angle, right? Like how do we leverage this thing? I’ve, I’ve studied the whole book, just read page 17 and do it this way, is what I love kind of getting at, and recently I’m very heavy into coding again, but frankly with AI assistance, um, and, and building up CoWriter.AI, which is a great customized platform for nonprofits creating, um, creating AI-generated content, and I do it in as safe a way as possible. Exactly. Uh, yeah, so that’s kind of what I wanted to, uh, I’m glad that you, you mentioned. I wanted to get to a little bit, CoWriter.AI, a proprietary, Whole Whale-created. Is that right? Right? Yes, I, yeah, that, that guy, that’s the coding that I was referring to. Maybe you hired smarter coders to do better at it than, than your initial cuts, but, uh, which nonprofits can use and train.
Cautiously on their own, have the, have the model trained cautiously using their own content. Exactly, yeah, for you, not on you, in the sense that we build it so that there is a central source of truth that is stored and protected for you that can then be pointed at any model, and then custom built prompts and guidance based on what your team needs to do today, and we delete every chat every quarter and pay for carbon offsets on all of the queries, uh, that are conducted, to try to at least approach carbon neutral, as, uh, difficult as that calculation is. Well, I admire the attempt. Um, Whole Whale is also a certified B Corp, so you’re, you’re, you’re committed not only to the environment, but also to the, uh, to the causes of, of, of social impact. We’re trying. I think I’m really excited though about the upside of AI, while still, like, having my little Tony voice in the side of my ear being like, well, but what about, what about like this, it’s gonna steal the content here, it’s gonna cause this, and I feel like this, um, AI study that caught your eye sort of really walks down this tiny tightrope of, I’m excited, but also I, there is caution to be had for some of the insights we found. OK, we’ll, we’ll get to some of those, but I’m glad that Tony voice, um, accompanies, I’m not gonna say haunts you, accompanies you. The Tony voice accompanies you, because I do have my concerns which I’ve, I’ve shared with Beth Kanter and Amy Sample Ward and then you, um, so listeners are acquainted with my, uh, skepticism, concerns about the, uh, the, the widespread adoption of artificial intelligence and, uh, large language models, so. I’m glad my voice is accompanying you. Yeah, and I like, I like the, the, it’s like respectful challenging too when you come at it.
I think there are some folks that shut down arguments, like, you're wrong, I'm right. You have this very clever way of getting in my head on, you know, the edge cases that come up, and I'm like, damn it, how am I gonna answer this type of argument? And it's important. OK, thank you. I'm glad. Right. Um, the AI brand footprint, your study. Because you're concerned that AI is gonna have an enormous influence over giving decisions in the fourth quarter that we are now in, right? AI's influence on giving. See, my questions are so... you can't even... where's the question? This guy talked so long. Where's the kernel of the question? Yeah, you're concerned about artificial intelligence's influence on our millions of giving decisions these three months. It is going to be an unprecedented influence for AI on philanthropy. And that is maybe a hyperbolic, an excitable way of saying that AI continues to grow in its answering of the questions we have. And as that happens, one of the questions that comes up in Q4 is: what are the best animal nonprofits I should give to? What are the best ways to support mental health in youth, and what charities should I donate to? Where should I give for the most effective cancer fundraising? Efficacy, fill in the blank of your cause. Those questions are going to be answered in the millions, controlling billions of dollars. I thought it was worthwhile to take a quick look at the data behind that. OK, so you looked at 6 different large language models. I'll let you name them. I have them listed here. Oh, you want me to name them? No, no, I can go through it. There's a lot of technical detail, but I know there are some geeks among us, so here was our methodology. We looked at 12 different cause areas, and then for each of those areas, we chose 10 iterations on the types of questions a potential donor might ask.
OK, the next step is: where should we test this? Should you just choose one model in the corner of the room? No. Anything worth doing is worth overdoing, as I like to say. So we looked at 6 major models that comprise the landscape as we see it: Gemini, OpenAI, Anthropic, Meta, Grok just in case, and, sorry, Perplexity. From there we literally send requests to all of them multiple times, get their responses back, and then analyze them. There are a number of reasons for doing this, but on one level, we have to understand this is a probability. It isn't like Google search rank, where I can definitively say you're #3, you're #1. That's not how this works. It's like rolling dice each time. But when you roll dice enough times, you get that nice little bump, the Gaussian distribution, with a lot of things here in the middle and less and less out toward the sides. I wanted to understand a bit more about the distribution of nonprofits being recommended in each of those areas across those models. How many flags of jargon am I gonna get thrown at me right now? No, that's OK. You're all right. Um, I do want to clarify: Gemini is the Google product, just to make that clear for folks. And Anthropic's model is named Claude; I don't know if folks might know it as Claude or Anthropic. The other ones I think people are familiar with. X is Grok, etc. OK, no, you're OK, no jargon jail. I just don't wanna go to jail too. I'll let you know. All right, so that's a lot of questions, because 12 cause areas, times 10 different iterations of questions, times 6 models. Isn't that something like 720? Over 700. But more importantly, we're talking about millions of words being analyzed by the end of the day, because each of those prompts comes back with a bunch of text that we then have to parse.
Uh, and then from there, what we did is figure out, all right, what is the there-there in this prompt? And we began by counting the number of mentions of unique nonprofits, as well as sources of influence. What does that mean? It means a lot, because who did the AI point to? According to whom is this charity a verified, quality place to donate? This is how we get to our first action step, because there are some very clearly, prominently mentioned influencers. Because see, this is the value. We're not even gonna talk about who was the top-named mental health charity, or the most commonly named animal welfare charity. We're not gonna do that, because I don't think that's where the value lies. The value lies in recognizing... now see, I'm trainable, George. I'm trainable. I'm 63, but I'm still trainable. I think the value in this study is the threshold recognition that artificial intelligence is capturing you, it cares about your work. And how does it learn about your work? What does it learn about your work? What does it say about your work when people ask either specifically about your charity, or generally in your cause area, or for any other reason you might show up in a search result? AI is paying attention to you. And I don't know, "cares about" is maybe a little overstatement, but AI is paying attention to you, to your work. OK, so let's go to our first real takeaway: the influencers. I'll let you name the most common influencers that nonprofits have got to be approved by, or have high standing with, if they wanna be thought of well by these large language models. Who were these influencers?
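For the geeks among us, the tally George describes (prompts fanned out across cause areas, question templates, and models, then unique nonprofit mentions counted in the responses) can be sketched in a few lines of Python. The cause areas, templates, and toy responses below are illustrative stand-ins, not the study's actual data:

```python
from collections import Counter
from itertools import product

# Hypothetical stand-ins for the study's inputs. The real study used
# 12 cause areas x 10 question templates x 6 models = 720 prompts.
causes = ["animal welfare", "youth mental health"]
templates = [
    "What are the best {cause} nonprofits I should give to?",
    "Where should I donate to support {cause}?",
]
models = ["gemini", "openai", "anthropic", "meta", "grok", "perplexity"]

# Every cause/template pair is sent to every model.
prompts = [
    (model, tmpl.format(cause=cause))
    for model, cause, tmpl in product(models, causes, templates)
]

def count_mentions(responses, known_orgs):
    """Tally how many responses mention each known nonprofit by name."""
    counts = Counter()
    for text in responses:
        for org in known_orgs:
            if org.lower() in text.lower():
                counts[org] += 1
    return counts

# Toy responses standing in for real model output.
responses = [
    "Consider the Nature Conservancy or World Wildlife Fund.",
    "The Nature Conservancy is a well-rated choice.",
]
print(count_mentions(responses, ["Nature Conservancy", "World Wildlife Fund"]))
```

Because each model's answer is probabilistic, the study repeats each prompt and looks at the distribution of mentions rather than a single ranking, which is why the tally is a Counter over many responses rather than a one-shot list.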
Yeah, I think an interesting way to answer this question is actually to give you a section of one of the responses, from one of the prompts, from one of the models, and I just want you to consider the implications. The prompt I gave this model, and this was a Gemini model, was "where to give for animal organizations." It's kind of disfluent, sort of how I put it together, but "where do I give" is the central question I asked. And here's one response that came back. It said: before donating, always do a little research. Charity watchdog sites evaluate charities on efficiency and program spending: Charity Navigator; GuideStar, now Candid; and BBB Wise Giving Alliance. Check their websites for their mission statements and how they use donations; local reputations; and finally, financial efficiency. A good rule of thumb is at least 70% to 80% of their budget should go directly to programs, not administrative costs or fundraising. You're right, that's instructive. All right, that's quite a good answer. I like that answer. Yeah, but unpacking it: what you're hearing is, it is not going to the primary source. It is going to evaluation platforms: your Charity Navigator, your GuideStar, your BBB Wise Giving Alliance. Those public profiles matter more than ever. They mattered before, but in your mind you're like, oh, a donor's gonna do some research and go check it out there. No, no, no. This is being baked in up front. Before someone even finds you, they're finding what other people think of you on these sites and others. It's different. Huge takeaway, huge takeaway. Um, GiveWell was another one that was named, another influencer. Yeah, but you were quoting from one prompt, one response out of 1,700. So, OK: Charity Navigator; GuideStar, which is now Candid; Better Business Bureau Wise Giving Alliance; GiveWell.
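The 70% to 80% rule of thumb the model quoted is simple arithmetic on a nonprofit's spending. A minimal sketch, with a made-up budget for illustration:

```python
def program_expense_ratio(program, admin, fundraising):
    """Share of total spending that goes directly to programs, as a percent."""
    total = program + admin + fundraising
    return program / total * 100

# Hypothetical budget: $800k programs, $120k admin, $80k fundraising.
ratio = program_expense_ratio(800_000, 120_000, 80_000)
print(f"{ratio:.0f}% to programs")  # 80% meets the 70-80% rule of thumb
```

Watchdog sites compute this from the Form 990's functional expense breakdown; the point here is only that the "rule of thumb" the AI repeats is a ratio anyone can check against published financials.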
Takeaway number one: you've got to be thought of well, you've got to be well ranked. Not quite for the reasons we thought. I mean, George, the reasons we thought they were important still exist. There are still people who... my dad, before he died, used to get the Better Business Bureau Wise Giving Alliance printed guide, and he would check to see if a charity that sent him mail was listed in the printed guide. I'm not sure people are using the printed guide too often anymore, but the people who do go to your site are looking for those little badges: the high-ranking badge, the platinum for Charity Navigator, etc. But now, even more so, the large language models are using the influencers, like you said, George. The second-order evaluation sites, not recommendation, evaluation sites, in their responses. Yeah, and those types of giving guides, third-party validation, there's a lot to unpack there, but it's not just, hey, let's update our website; it's go there. Another nuance here was mentions of GoFundMe. I was kind of curious, looking through the data: how much of this is gonna be "find individuals" and go that route? And what we found was based on the mentions of GoFundMe in response to requests like, how do I help, say, youth mental health or poverty issues? Where should I go, how should I help? It's an open-ended question on purpose. We did want organizations; we wanted, what is it and how is it advising? And for GoFundMe mentions, actually, 70% of those mentions came from Grok. So in the land of Grok, which is Twitter, which is X, which is Elon Musk, just to get all of the bingo cards there, it is disproportionately recommending GoFundMe. It also recommended charities, mind you, but I found that interesting.
Part of that is in the training set, and part of that is in the sort of, you know, "we do our own homework," but to a bizarre degree in there. So Grok was looking at the frequency of GoFundMe campaigns for charities as a way of determining whether they were a good place to give? First off, we are analyzing its responses across all of those causes, right? Every single cause, every single chance. So across all of those tested areas, we were seeing that it surfaced that the user should go find and look for someone in that cause area to donate to on GoFundMe, as a way of helping that particular cause. Why that happens, you can speculate, but actually, you know, it's the training data set. There are a lot of people that go onto X saying, hey, I need money for this, go fund me, and that type of link and that type of cause might be overweighted. Plus potentially an underlying, frankly baked-in mistrust of institutional organizations. That's a bias. Grok had a bias for GoFundMe campaigns. They all have a bias. Right, let's talk about some of the biases, because this gets to what the large language models think of your nonprofit. And then, of course, we're gonna talk about what to do, how you can enhance your AI brand footprint in light of what we're learning from the survey. So let's talk about some of these biases, like size. The results I saw, I don't know, maybe not 100%, but these were very large charities. They had large digital footprints.
Yeah, in most of the cases where we do a top-10 breakdown, we have a herding effect where the larger, more well-knowns are at the top. Look at the environment, just using that as one example, the environmental mentions by model: on Gemini, the top two are Environmental Defense Fund and the Nature Conservancy. On Anthropic, which is Claude, World Wildlife Fund comes up first, then the Nature Conservancy. The Nature Conservancy wins on OpenAI. On Grok, World Wildlife Fund wins. On Perplexity, the Nature Conservancy wins. So you see the sort of jockeying, but those are massive organizations, and you have to go down pretty far before you get to something like an Earthjustice or Oceana, or, I'm trying to find it, the Arbor Day Foundation, which is not small, or Defenders of Wildlife, you know, coming in at the tail. Those aren't the household names like the Nature Conservancy or World Wildlife Fund. All right, so size. Our listeners here are in small and midsize nonprofits. They are likely not in any of the environmental causes you just named specifically, and they are very likely not among the biggest names in any of the 12 cause areas that you evaluated. But I hasten to add, there are things you can do. We're gonna get there, we're gonna get there, it's coming. So fear not, as Grandpa Martignetti used to say. Nunjawari, in his New Jersey Italian accent. Nunjawari: there are things you can do. It's time for Tony's take two. Thank you, Kate. I have a new tale from the gym. There's a woman who's been coming to the last three classes that I take every Tuesday morning. It's the only class I take each week; I just go to this one class.
And she's quite active. It's an aerobics class, and she does all the weights, she does most of the moves, you know, like we're stepping back and forth or side to side, things like that. She moves pretty well. I have my eyes on her because she stands right in front of me; all three times, she's been right in front of me in class. And the thing is, she's got all this activity for the hour-long class, but she's on supplemental oxygen. She's got a tank down next to her, and the tube and the cannula in her nose, the whole class. So there are some things she can't do, like she can't step too far forward or back or side to side, because the tube isn't that long, but she moves enough and she keeps up. You know, I can't help but see her. And that just makes me think: if the woman with supplemental oxygen stays energetic through this hour-long aerobics class, almost anybody could be working out. Of course, there are people that are more compromised than just supplemental oxygen, but anybody who's not on oxygen, we all could be working out to some degree, if this woman can do it. So she's pretty amazing, pretty uplifting and encouraging. That's another tale from the gym. That is Tony's take two. Kate. I don't know if you follow, like, gym TikTok or anything on Facebook or Instagram, but now I'm seeing a lot of fitness instructors coming up with different workouts that are seated, for people with maybe mobility issues or any sitting-down disabilities. But that's great that she can get up and do it and enjoy it and still be active with something that's probably heavy, you know, to carry around. Yeah, well, the tank is, but it lays on the floor. I haven't seen any of that on TikTok or Facebook or online, but I have seen chair yoga, which is for older folks who do have mobility issues.
Balance, you know, balance could be an issue. There is chair yoga out there. I don't know if we do a chair yoga class in my little beach town, Emerald Isle, but I've seen chair yoga. But yeah, the woman is greatly uplifting. Yeah, she's moving. It's awesome. We've got beaucoup buttloads more time. Here's the rest of your AI brand footprint with George Weiner. Other biases: what other biases did you find in our large language model friends? Our large language model friends. This kind of surprised me. I don't know if you can classify it as a bias, and it gets back to the herding effect, but the number of charities mentioned, like the unique number of charities mentioned, really surprised me between the cause areas. So something like the environment or poverty had a range of unique nonprofits mentioned, across all of our sampling, of 127. Then I go to something like cancer, and there's a 30-organization spread. That is massive. I was shocked. I was like, shouldn't they all just probabilistically find a large number of things? There's extreme herding in certain cause areas. An action step you might use here is to actually do this type of research for your own backyard. Pull back and ask: what kind of question might somebody who cares about my cause area and my locality put in to find the most effective, best-run, greatest places to give for X? And see who shows up. But remember, you want to do this incognito, not influenced by your own GPT, if you are paying for it and have trained it. You want to have it from a cold start that isn't biased by your bias and your information, because otherwise it'll be like, obviously you're the best. Mirror, mirror on the wall? Like, hold on. So does that mean you shouldn't even do it from your nonprofit, your office browser?
Anything that's passing in information to it. Geographic is fine, but I would say anything else that passes information is essentially tainting it, tilting it toward something that's more relevant to you, when in fact you want something that better reflects the underlying biases and approaches of that model, to sort of explore it. We actually used direct API calls, so we know exactly what information we gave it and exactly what we got back, and could kind of clean out the clutter, so to speak, of any customization, cached information, or browser influence. Here's a tip. Actually, here's a very, very hot tip, if you're like, I don't know where to start. I want you to go to the site OpenRouter.ai, and that will let you just mess with whatever model you want and see what happens, with you getting full control over the data being sent. OpenRouter. What does it do? Just look for OpenRouter. Yeah, yeah. OK. All right, so are you saying that's a safe place, that it's less likely to be tainted? It will let you test different models side by side in a way that is safe in this land, and it gives you control over more of the variables. It's really kind of elegant for side-by-side comparisons. OK, very good. All right. So let's move to, um, what small and midsize shops can do. Now, part of what they can do is what you are doing at Whole Whale with AI brand footprint, which I am helping you to do. I am propagating this for you. So explain. Go ahead, you flesh out what I just said, that I know you're doing, that we are helping you with. By the way, thank you for reinforcing AI brand footprint. Why we keep using this term is because it is a concept that we created at Whole Whale to explain what's going on when you're discussing this ecosystem of information that AI is talking about and representing your brand on.
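For listeners who want to try the direct-API, cold-start approach George describes, here is a minimal sketch against OpenRouter's OpenAI-compatible chat completions endpoint. The API key, model ID, and question are placeholders; check OpenRouter's documentation for current model names before running:

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(model, question):
    """A bare, uncustomized request: no system prompt, no chat history,
    so the reply reflects the model's defaults rather than your data."""
    return {"model": model, "messages": [{"role": "user", "content": question}]}

def query_model(api_key, model, question):
    """Send one cold-start prompt through OpenRouter and return the reply text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(model, question)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (needs a real key; the model ID below is a placeholder):
# print(query_model("sk-or-...", "openai/gpt-4o-mini",
#                   "What are the best animal shelters to donate to near me?"))
```

Because the payload carries nothing but the question, repeating the same prompt across several model IDs gives you the side-by-side, untainted comparison the conversation recommends.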
You're not getting a lot of data about it, but you know it's happening. You're starting to hear maybe somebody in the development department be like, you know, they heard about us from, they said, from chat. And it's like, oh, that's interesting. There's a whole ecosystem out there, and we're just scratching the surface of it with this study. But you should begin to care about what that footprint looks like, what's influencing it, how big it is, whether it is growing or shrinking. Why I am focusing on it, and why I love the fact that we've mentioned it like 17 times now, is because we are trying to put this concept out there, imbue it with meaning, and connect it to us, so that when, not if, and it already happened, Google overviews, that little AI answering when we do Google searches, talks about this concept, it attributes it to us. It surrounds it with our links and our language, rather than literally stealing it and just throwing it into the soup, the overall word salad of the AI systems. So, to pull back: how do you imbue concepts with meaning, own them, and encourage attribution? There are some number of tactics, but that is the game we are currently playing and laughing about as our inside joke here. Exactly. So we're not gonna leave nonprofit radio listeners wondering what some of these tactics are. How can you create something around your work that's unique, that will be attributed to you when, as you said, when, not if, the large language models find it? How do you identify all this, bring this all back to you and your site, your work? Because this is exactly what Whole Whale is doing with AI brand footprint. There, I said it again for you. It's gonna be in the transcript. It's probably gonna be in the show notes. What tactics can we all learn from what you're doing at Whole Whale?
So the content that we used to write for search engines, for SEO, search engine optimization, is dying. The idea that all I have to do is answer a question accurately and I'll get credit is dead. Because, also, if you realize the AI can answer the same generic FAQ question, hey, how many ounces are in this amount of thing, that information has been commoditized and will no longer bring you traffic. Your "10 facts about this issue" is not going to work anymore. What will work is first-party data. What kind of information can you bring to bear on this topic? Have you surveyed your audience? Another way to think about it is: according to whom? Is there a testimonial, a statement that can be attributed to the CEO, to the founder, to the stakeholder who received the service? Because when AI comes and summarizes and takes that content, it actually is sensitive to trademarks and first-party data attributes. And saying, oh, according to the local animal shelter or youth center leader, this is the thing: that's tied and anchored now to something that the AI will respect. I think there's a lot to unpack in there, but hopefully you begin to see the nuance. This is all about optimizing content for the AI tools, right? Or is this a subset of optimization? It's a way to make sure AI respects the source, attributable to the content it has scraped and taken from the site. More and more we're going to see increased traffic from these AI bots that are coming to, you know, answer somebody else's question, and maybe they show where that information came from, and maybe they don't.
And in doing that, these are the initial phase of tactics when you create your content. Also, frankly, it's putting the human back in the content. I think we may look back at writing the "how to tie your shoe" content, because it gets traffic, as not that relevant to our organization. Like the "15 cutest cats" to promote our thing really was only getting people, on a very high level, to maybe browse through our site. So this is, hopefully, a return to real content. OK, but that had value. The, you know, 15 cutest cats. All right, well, let me take a look at this shelter. If it's a local place, maybe I'm looking to adopt, maybe I'm looking for a place to volunteer, or obviously maybe give. So that had value as the beginning of a pipeline. It probably still does have value, it does still have value, but what you and I are focused on is the AI evaluation of your work. And from that perspective, the 15 cutest cats: not valuable. It won't drive attention. I mean, we are still humans here. This is where the Tony voice is on your shoulder: no joke, people like cats. OK, OK. However, the nuance, the difference here, is that you won't get the attention you used to for that article. It used to drive attention. Without attention, you're not gonna get the 1% of those people sticking around and giving you their email, of which 1 out of 10 makes a donation. That flow of traffic has been severed, and is in the process of being severed, and you can see this. Action step: look at your organic traffic year over year. It is, at best, flat, if not going down. Even the smartest folks playing the game are publicly saying we're in trouble when it comes to organic traffic. So that game is on the decline. Does that make sense? Yeah, yeah, it does. Um, that's what I'm talking about as a source of organic traffic.
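That action step is easy to put numbers on. A minimal sketch of the year-over-year check, with made-up monthly session counts standing in for whatever your analytics tool exports:

```python
# Hypothetical monthly organic-traffic counts (sessions); the numbers
# are invented for illustration, not real analytics data.
this_year = [4200, 3900, 4100, 3800]
last_year = [5100, 5000, 4900, 5200]

def yoy_change(current, prior):
    """Percent change in total organic traffic, year over year."""
    cur, pri = sum(current), sum(prior)
    return (cur - pri) / pri * 100

change = yoy_change(this_year, last_year)
print(f"Organic traffic YoY: {change:+.1f}%")  # a negative value means decline
```

Comparing the same months against each other (rather than totals across unequal periods) keeps seasonality, like the Q4 giving bump, from masking the trend.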
But I'll play the cute cat game if you want. So let's say I'm sitting there and I've got a cat shelter. OK, how could I turn this into something quotable, something AI won't steal, or if it does, it'll attribute it to me? Well, maybe, back to Q&As, I could actually do a study: over the past quarter, 70% of the cutest cats, based on my cuteness score, cute versus ugly, were adopted, yet 90% of ugly cats don't get adopted. And suddenly I've got first-party data that you collected, you surveyed, you did. It is wildly interesting, because I'm like, what's an ugly cat, right? And now you're playing the game, because when AI comes, it's like, according to the Long Island local cat shelter, ugly cats don't get adopted at a rate of 90%. That's right. OK, that's an excellent example of the tactic that we're talking about. Please, if you go out there and do an ugly cat survey on rate of adoption, please send it to me, because I think I'd be curious. "What's an ugly cat" is the greatest question. You could use AI to make the determination; that way it would save you from the whole internet fallout. Like, according to this AI, how cute is this cat from 1 to 10? You can blame it on AI. OK, you mentioned some other things, testimonials, quotes from the CEO. Again, all first-party, attributable to you. So then, and I'm gonna use Google, because that's the primary search engine that people use, in that Google summary, powered by Gemini, right? These are all Gemini results that we get. Google doesn't say Gemini; it says AI summary or something, AI Overview, AI Mode. They'll change it next week, so fine. It's Gemini. It's Gemini, correct. That's the underlying model, you're correct. So we want to get attribution. We want it to say, from Whole Whale, from the Long Island cat shelter. Correct.
You wanna be the authority that pops up on the side, so you do in fact get a potential click, and a brand impression associated with the topic of someone asking, why do ugly cats not get adopted? Where do we put these testimonials and quotes? They're basically embedded on your site, right in your content, right in line. You can use quote tags; you can use what's called schema markup to make sure that when the AI is reading it, it is more quickly attributable. And in those testimonials, you can also just make up quotes. Like, I now have to come up with some sort of clever thing to say, and for the AI study, I literally put in there, I was like, I have to come up with something. So I wrote, "This season, AI will determine giving more than any other in history." It's a little over the top, and a pretty safe statement, but I said it, I quoted it, I shoved it in with a quote tag, and now that's one of those attributing factors. There's another big thing I wanna touch on, but I wanna make sure that makes sense. Uh, it does, but all right, remember your big thing, but you are in jargon jail for schema markup. I'm gonna end up there somehow. Schema markup is little HTML tags, behind the scenes. Like, if I want to make something bold, I put a little B tag on it. Every P tag has that little spacing. H1 tags make the top header. This is just another one of those types of tags in the system, which the machines read to understand more quickly what's about to come and what lives between the open and close of that tag. OK, very good. All right, you're out of jargon jail, and listeners, the person who does your website will know exactly what we're talking about. This is HTML. AI will know what you're talking about too. This is an amazing thing: you can literally say, hey, teach me what I need to know about schema.
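One common way to implement the attributed-quote markup George describes is schema.org JSON-LD pasted into a script tag on the page. This sketch builds a Quotation object with Python; the spokesperson and organization names are hypothetical, and your web person may prefer inline blockquote or q tags instead:

```python
import json

def quote_jsonld(quote, author_name, org_name):
    """Build schema.org JSON-LD for an attributed quote. Paste the output
    into a <script type="application/ld+json"> tag on the page."""
    return {
        "@context": "https://schema.org",
        "@type": "Quotation",
        "text": quote,
        "creator": {
            "@type": "Person",
            "name": author_name,
            "affiliation": {"@type": "Organization", "name": org_name},
        },
    }

markup = quote_jsonld(
    "This season, AI will determine giving more than any other in history.",
    "Jane Doe",                       # hypothetical spokesperson
    "Long Island Local Cat Shelter",  # hypothetical organization
)
print(json.dumps(markup, indent=2))
```

The structured block tells crawlers exactly who said what and on whose behalf, which is the "according to whom" anchor the conversation keeps coming back to.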
What would I need for schema markup of this page? You can ask AI, and you're a ready-to-go developer on that front. OK. This is the thing, these are the things that concern me. This is where we get into creativity conversations. What was your other big point? The other big point is something that has dawned on me, and it kind of, uh, sucks, technical term, because I've realized a lot of my writing is being commoditized and is not going to be found by people; it's going to be summarized. And what I realized is: where do we go to be uniquely human? Where do we go for trusted information? More and more, I think, it's going to be audio and video. Video is harder to fake. Not impossible, but much more unique, a little bit messier. So for your tentpole content, your main focused content, you have to have a video associated with it, for a number of reasons. Including the fact that YouTube is the #2 search engine. Including the fact that people are now going to say, I get that all this text I'm looking at is AI-driven; I want to hear a human say it, because at least that human had to read it before saying it and letting those words out of their mouth. And third, because video is also showing up very, very clearly in those AI overviews: right below the textual answer that AI gives, they're giving video answers, and then they have the good old-fashioned links, which are going to go the way of the yellow pages, I think. So we're talking about side-by-side content. One part is for the humans and the other is for the AI models. On our site, you're talking about videos, videos for the humans, mostly, although I understand the AI summaries are finding the video and promoting it for you if it's on point. But then there's also the AI-needed content, like, side by side. We're not talking about having to, I mean.
Yeah, isn't that what we're talking about? Two different levels of content, one for the machine and one for the humans who do come to your site? I'm beginning to believe that we're probably headed toward the text of your site being more like a database for AI to reference. So what is it that you're going to create that is uniquely human? Communicating in that way is going to be video, for now, because it is also getting the lift on these other platforms. It is showing your brand, your people, and your message, in your words, as opposed to AI's summarization spit back out in text, sometimes attributed to you, sometimes not. There's no way to just sort of steal video the same way you do text. So I'm really emphasizing, and have been for a while, the library of content that rides alongside my written content, which, you know, I try awfully hard on, but I make no illusion that AI isn't literally copying it and shoving it into its system and answering the questions that used to drive attention for us. So how do I play this game? I look at the data, and it seems that searching is still happening on YouTube. I can embed video on my page, so I increase engagement on my page when people do in fact go there, which is a positive signal in the land of Google, old SEO and current. So it is a way to think about your AI brand footprint; it's sort of adjacent, because of the way Google is showing it, and the way everyone else copies Google. Like Perplexity, which is a kind of Google competitor, I think, for AI-based discovery of information and searching. They are all playing this game of amending and appending the relevant videos they find to that topic. OK. To me, this is as revolutionary as back when we were saying you need to have a website. Yeah, this is a big phase shift.
This is the yellow pages to website shift. I don’t make that as just sort of a throwaway statement. It is a big change and we’re in the middle of it. And so that is also one of those reasons I try to, uh, rant as much as possible, to wake people up to the type of content you’re about to create. You got a content calendar. Alright, we’re gonna map out 2026 and here are the, like, 36 articles we’re gonna write. I’m like, take a beat, because I think if you continue to write the way you used to and create content the way you used to, you are, um, creating another listing in the Yellow Pages. I don’t want folks wasting their time in that way. And also, this should not be a surprise, but if you are using AI to write all of your content, why do you think AI is going to surface what it already wrote, or what it can already answer, on your website? You don’t have to think that hard to be like, oh wait a minute, hundreds of millions of people, like 10% of the adult human population on this current planet we are on, use AI to answer these questions. You’re disintermediating yourself, you’re removing yourself, when you simply press copy and paste from AI. And by the way, people can tell when you’ve written it in a lazy way. Oh, that’s huge. You were given like a perfect summary. I thought, oh, this is a good place to end. I’ll just say, that’s George Weiner, Chief Whaler, and I’m out. But then you opened up, and, by the way, um, people can tell. Yes, yes, uh, we’re still human here. And I don’t know, I’m not saying that I’ve spotted every bit of artificially developed content that’s ever come across my screens. I’m not saying that. But there is a feel of fakeness to artificially generated paragraphs. Yeah, you know what’s interesting? The human’s ability to do pattern recognition is tremendous.
It’s unbelievable what we’re able to tune our attention to and what our brain simply learns behind the scenes. Like, we can now smell an AI-generated image. We can smell the AI-generated text; maybe it’s the vivacity, maybe it’s the unusually consistent cadence of number of words per sentence. There is something that you pick up, and we are all collectively building this ability to see it. And maybe I’ll take a step back, if you don’t believe me, on our amazing pattern recognition. I want you to think in your mind of a stock photo of the following: successful business person shaking hands. And you’re like, it’s like you could smell it. It’s just the way the image is framed and cropped, and it’s just generic humans doing a business thing. The same way you can spot stock, we can now, and are, building that muscle, that pattern recognition, for AI slop, work slop, whatever category of a bunch of AI-generated text. Which is why it’s also fun talking about this new phase of, like, putting the human back in. How do you do that with video? How do you do that with testimonials? How do you do that with actual first-party data? Like, I used to waste tons of time writing very, very long articles, and thinking and researching, but like Google searching and pulling back information. Instead, I threw all that away. I spent all of my time doing this AI study, to be like, I get to geek out on this topic, go way deep, and then put that out instead. So you’re just channeling your energy of creation in a different vector. I think your ability-to-spot-stock-photos analogy is spot on too. All right. I think that’s very valuable, George. Well, we were here to make sure your audience didn’t walk away with panic. I feel like a lot of conversations go to this hand-wringing, what do we do next? So you have some takeaways. You got a lot of tactics. Uh, let’s give one more shout out.
That’s the research, that’s the proprietary first-party data, that you’ll find at wholewhale.com. Go, just give him a break. Give him some organic traffic. Don’t click through from anywhere, just type in wholewhale.com and go. Do it for George. Let’s see if he gets a burst in, uh, in the month of October. And you’ll find George on LinkedIn. He and I are very active there together. Uh, just, you know, valuable content, George, valuable, valuable ideas here. Thanks very much for sharing all this. Thanks for giving it a larger audience. Next week, HR for non-HR professionals. OK, that, uh, that was supposed to have been this week, but George Weiner came in and his content is related to the 4th quarter. So that’s why I squeezed him in, because it’s the first show of the 4th quarter. So that’s the explanation. Ordinarily I would blame the associate producer, but this time, just this time, it was my own, I would say rather savvy, decision. However you might characterize the decision, I made it. The savvy decision. If you missed any part of this week’s show, I beseech you, find it at TonyMartignetti.com. Our creative producer is Claire Meyerhoff. I’m your associate producer Kate Martignetti. The show social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.

Nonprofit Radio for September 22, 2025: The State Of The Sector (Beginning With AI)

 

Gene Takagi & Amy Sample Ward: The State Of The Sector (Beginning With AI)

This year, any conversation about the nonprofit sector finds its way to Artificial Intelligence. So we start there, with our contributors Gene Takagi on legal and Amy Sample Ward on technology. Amy is concerned about our lack of security readiness and shares their Top 5 security must-haves. Gene explains your board’s duties around tech, budgeting and planning. They both see resilience as critical. Plus, a ton more. Gene is principal attorney at NEO Law Group and Amy is the CEO of NTEN.

Gene Takagi

Amy Sample Ward

View Full Transcript

Hello, and my voice cracked. Welcome to Tony Martignetti Nonprofit Radio, big nonprofit ideas for the other 95%. I’m your aptly named host and the podfather of your favorite hebdomadal podcast. Oh, I’m glad you’re with us. I’d suffer the effects of chondrodermatitis nodularis helicis if I heard that you missed this week’s show. Here’s our associate producer Kate with what’s on the menu. Hello Tony. It’s so funny, that voice cracks like I’m 14. Hey, Tony, I hope our listeners are hungry. The state of the sector, beginning with AI. This year, any conversation about the nonprofit sector finds its way to artificial intelligence. So we start there with our contributors Gene Takagi on legal and Amy Sample Ward on technology. Amy is concerned about our lack of security readiness and shares their top five security must-haves. Gene explains your board’s duties around tech, budgeting and planning. They both see resilience as critical, plus a ton more. Gene is principal attorney at NEO Law Group, and Amy is the CEO of N10. On Tony’s take two: tales from the gym, the cure for dry eyes. Here is the state of the sector, beginning with AI. It’s a pleasure to welcome back Gene Takagi and Amy Sample Ward, our contributors to nonprofit radio. Gene is our legal contributor and principal of NEO, the Nonprofit and Exempt Organizations Law Group in San Francisco. He edits that wildly popular nonprofitlawblog.com. The firm is at neolawgroup.com and he’s at GTak. Amy Sample Ward is our technology contributor and CEO of N10. They were awarded a 2023 Bosch Foundation fellowship and their most recent co-authored book is The Tech That Comes Next, about equity and inclusiveness in technology development. You’ll find them on Blue Sky as Amy Sample Ward, aptly named. Welcome. Good to see you both. Gene, Amy, welcome back. Good to see you both as well. I actually got to see Gene in person this week, which was a real treat. But your faces coming through the internet. Where? Where?
In DC, in a meeting. Oh, cool. Yeah, it was wonderful to see Amy and hear a little bit more about her family and learn about things going on. Um, and great to see you too, Tony. Thank you. Last time we were together was the 50th. That’s right. Yes. All right, um, so Amy, you have lots of conversations with funders, intermediaries, nonprofits. Uh, I’d like to start with you. Just, what are folks talking about? Yeah, I think there’s a lot of desire for thoughtful conversation across the sector right now, and over, you know, the last handful of months, and I’m sure the months to come. And that desire for thoughtful conversation is trying to be held in a time where things feel rapidly unraveling, you know. And a few patterns, I think, have been coming up, at least in the versions of conversations that I’m in, whether those are, you know, one-to-one with other intermediary organizations, capacity-building organizations, um, nonprofit service groups, or even philanthropy-serving organizations, or with funders themselves. And they’re, of course, different flavors of the same dish, maybe, but I think everyone really wants to hear and help, and it feels like there’s not that much help happening. Um, when you talk to funders, I presume you’re talking about, how does that go? Like, you should be funding technology, you should be funding capacity building. You’re advocating for things? Yeah, I mean, part of what N10 sees as our kind of theory of change, in the way that we make impact, is of course directly supporting nonprofit staff through training, but also shifting the conditions in which all of us are doing this work.
Right, so asking funders to fund adequately for the technology and data that is needed to deliver the programs they’re funding, right, is part of that, or all kinds of other advocacy, um, big A, little a, you know, influencing philanthropy. And they take these meetings, like they don’t mind being told what they ought to be funding? Oh, it’s easy to take a meeting. It doesn’t mean they’re implementing. What’s the outcome and what’s the action? I realize that. But, OK, I think that most of the conversations N10 has entered into with foundations are not necessarily on the premise of, like, can you please take this feedback and fund a certain way, right? We just say that when we have access to folks that we could share it with. But mostly, um, I think in these times, just like honestly in 2020, funders and other philanthropy-serving organizations are asking for what we see, because we are able to see into a lot of different types of organizations across the sector, and see trends that are emerging, see what folks are really asking for help on, right, in a way where we’re not having to divulge, oh, this organization that’s your grantee, they don’t know how to do this, right? There’s not that vulnerability. We’re able to share trends, and unfortunately, the trends aren’t new, but at least they’re asking about them right now, and they are very, um, vulnerable issues. Like, we are seeing incredible lack of security readiness in organizations.
And as we’ve talked about on this show, and Gene has talked about, you know, there’s a lot to be concerned about when you think of a nonprofit organization’s, like, digital and cybersecurity, because it’s your staff, it’s your content, but it’s also all of your constituents, all of those people who’ve received programs and services. And if you feel that your mission and your programs and services are vulnerable, those folks in your community who’ve accessed them are 10 times more vulnerable, right, um, than your organization is. And that’s something that, I think, for us, we just care about more than anything. And so it really has felt like a spotlight on security. And even just to, um, illustrate, we created a new program just to try to help in this way, um, a 3-month, just security-focused program. We had a single email that said that it was open. Um, in 4 days, we had 400 applicants from 26 different countries asking to be in the 20-person, you know, cohort. So that was, I think, validation that we were really hearing the trend and hearing, OK, what’s behind some of these questions that we’re getting? What are people really struggling with? And, oh my gosh, OK, we’re right, they are really struggling with security. Um, let’s bring Gene in on security. You’re nodding a lot, Gene. And we have talked about, as Amy said, as they said, we have talked about it, but, uh, you know, it bears amplification, because we all have talked about cybersecurity, protecting data, but especially, as Amy’s saying, the people you’re doing the work for, if you’re involved in people-oriented work. Gene, remind us.
Oh, I’m amplifying everything Amy says, as I’m wise to do, um, but maybe I’ll just add that, you know, when people think, including funders, when they think about technology, and some of them are just focused on AI right now, but technology is much broader than that, of course. When they’re thinking about technology, they really have to think of it as one of the core assets of an organization. And that’s not all, because it’s also a huge risk and liability, not only to the organization but also to all its beneficiaries and the communities that they serve and the communities that they exist in. So it’s all of that. It’s even more complicated to manage, if I might venture to say this, than your other main investments, which are like in staffing and in facilities. This is stuff that we don’t have a lot of experience with. It’s newer things that are coming up. We haven’t learned how to manage it very well. It’s a little bit out of control as it develops, as with AI going on. We don’t even know what the laws are related to this. Um, so this is stuff that funders need to fund and organizations need to invest in really badly, and when they don’t think about doing this, they’re really living for the short term at the expense of the intermediate term, because it’s not even that far off in the future where these risks will ripen. They will ripen very, very quickly now. Um, so that’s my two cents. And to add to what he’s saying, I talked to two different, um, funders, who are regional funders, not national funders, and said, hey, I know the folks that are your grantees. They’re, um, predominantly rural organizations. They’re predominantly very small organizations, you know, single-digit FTEs. These are folks who, we can see in our data, not as individuals or individual organizations, but by kind of organizational demographics, are very likely to have really low scores, you know, ineffectiveness in these areas. We have free resources.
We’re not even, like, asking you to fund us necessarily, which I should have been asking, but, you know, coming at it from, really, how do we get these resources available to organizations who we know are vulnerable? And their feedback was, well, security is not an issue that any of our grantees have raised with us. And I just want to pause there, because why would a grantee, in the vast power imbalance between a very small, rural, two-person organization and a funder, say, we don’t have a security certificate on our website, we don’t have a secure, you know, donation portal, we don’t have a database protected? Like, why would they surface these to a funder? Of course they hadn’t. Of course no one has brought this up, right? Why would they? You need to be thinking beyond what was in that grant application, and about really the safeguarding of that mission. Not only why would they admit it, but it may very well have nothing to do with, well, it is related to what they might be seeking money for, but it’s the grant application. Yeah, right, it’s not gonna be a question on the grant application: do you have a secure fundraising portal? Um, Gene, you have some advice around boards. Like, this should be a board-level, board-and-CEO conversation, right? Yeah, I mean, that’s where it starts. Yeah, and very obviously, technology comes up as a budget item, right, for the board. So when the boards are approving annual budgets, are they leaving any space for technology changes? Well, so many organizations, including governments, are just, like, putting in patches, right? They’re investing in patches, and so they’ll patch, patch, patch. Um, but the technology is advancing so much quicker than patches can actually address. And again, the persons and organizations at risk are not only the charity itself, right?
It’s all of the beneficiaries whose data they’ve compiled, and it potentially just goes beyond that as well. So it’s really, really important now for the boards to say, let’s think about this as one of our core assets and our core risks, and figure out how we’re going to properly budget for this item. And talking about sort of risk-opportunity assessments, you know, and saying, well, what happens... I’m a big fan of scenario planning, and maybe it’s hard because these things don’t have definitions, but over strategic planning for a longer-term plan, I think scenario planning right now is really important, because the environment is just shifting so quickly, right? It’s like shifting every few months, it feels like. So scenario planning for different scenarios, and some of that would be, well, what happens if we don’t change our technology, or what happens if we don’t invest? What are the worst things that can happen? What are the likely things that are gonna happen? And do we actually have board members who understand any of this? Do we need to relook at our board composition? Do we have anybody younger than 50 on our board? And for a lot of organizations, too many organizations, the answer is no, which will hurt you in the fundraising pipeline down the road very quickly as well. Um, we’re not incorporating enough Gen Z and millennials into the governance and leadership positions, as boomers and even Gen X are hanging on to positions longer. You know, for a reason, for a good reason, but, um, we need to bring more younger people into the pipelines because they have perspectives. They have a lot at risk, um, here as well. So that’s kind of my thinking. With respect to fiduciary duties, in the budgeting, they’ve got to understand it.
In the recruiting for board members, they’ve got to figure out how to develop the pipeline of who to bring in on the board. Like, in their duty of loyalty to the organization’s best interests, they’ve got to be thinking not only about the purpose or the mission of the organization, they’ve got to be thinking of the values of the organization, including how much they value the community. And all of this relates to the organization’s, what I’ll call, reputation, or its, um, legitimacy to the public, at a time when the government is poking holes at organizations’ legitimacy. If you haven’t earned that from your own community, fundraising and everything else will just dry up. So you’ve got to invest in legitimacy. If you’re not investing in technology at this point, and protecting the persons that rely on you to safeguard their data, you’re gonna lose legitimacy really quickly, and you’re gonna be irrelevant or, you know, liable. Two quick things to add to what Gene’s saying, on the staff side but then also on the board side. Plus a million to everything Gene said about making boards more diverse, um, including age, but I don’t want folks to think that that means, because you need to, like, have a 25-year-old on your board, that they’re now in charge of your technology. The board’s job is not to be in charge of your technology. But having more folks in that board meeting who have perspective, or experience that a lot of different things are possible, helps open up strategic conversations, to say, hey, have we considered this? Not that I’m now the implementer because I’m the board member. But it really does help, and I just want to draw that line: we’re not saying make someone on your board in charge of technology, but having people comfortable with technology strategy conversations is very, very valuable, of course.
The other side, on the staff side: you know, one thing we see in our research, um, and our different assessment tools, and in our programs, is yes, there are still organizations that don’t have all the policies that they could have, right? They don’t have a strong data retention policy. They only think, oh well, payroll files or HR files, right? They’re not thinking about all of the data, all of the content, you know, all these different things, right? We could have a big policy book, and there’s work to be done there. But the real area of vulnerability that we see is organizations likely have some policies, but they do not have staff fidelity to those policies. So you could, like, go through a checklist and be like, yep, data consent policy, data collection, you know, but staff don’t know the policies exist, and they are not practicing them in a consistent way. And so I wanted to go back to the scenario planning note, because I think we see some folks, um, you know, yes, you could bring in a consultant, or you could get some sort of big security test going. But what you could also do is, in a staff meeting, just take that time and say, right now, if we got an email that we had been hacked, what do we all think we would do? And just talk it through together, and see, oh, this person thinks we would do this, and this person over here says, oh, we have an account here. What do we have? What is our answer, right? What are the questions we don’t know how to answer? Let’s go answer those questions for ourselves, and really have more, um, opportunity, I think, to surface with staff where people don’t know something. Not in a shame way, but in a, like, gosh, this is what we should focus our training on. It isn’t just, let’s draft another policy. Let’s understand how to do these things, as the people doing them every day.
Amy, uh, in a couple of minutes, after Gene and I talk about something that I’m gonna ask him, then I’m gonna ask you something, but I don’t want to put you on the spot with no forewarning. Our audience is small to mid-size, so let’s go more toward the smaller. Let’s take a 15-person nonprofit. Uh, I’m not sure it matters what the mission is. I don’t want to constrain you. I want you to think broadly. I’m the CEO of a 15-person nonprofit. We’ve got a 3, maybe 3 to 4 million dollar annual budget for 15 full-time employees. Uh, what I’m gonna ask you in a couple of minutes is, what are some basic things you can name for us that we ought to have? OK. I thought that was... you know, yeah, I know, you’re gonna start writing. Thank you. Gene, I want to ask you, uh, let’s talk about the core assets of a nonprofit. I love that you’re identifying technology as a core asset. Are there other core assets that I’m not thinking of? The staff is typically number one, right? Facilities is typically a pretty big investment, although that’s been changing, um, with a lot of remote working now, and organizations seeking to downsize, how they allocate where their investments are, where their assets are. Um, staffing is also changing, in part because of some technology, right? So if technology isn’t in that bucket in there... you may be downsizing staffing, you may be reducing facilities, but why is that happening? Probably somewhat related to your technology. If your funding stays stable, I know that’s a big assumption, but probably technology is playing a part in that. Is your technology gonna break down, like, in a year? That’s something to really think about.
If you’re now reducing staffing and reducing facilities, relying on technology that’s gonna break down in a year, or give you problems in a year, or create harm to your beneficiaries, that’s like the big one that Amy raised that really hits home for me. It’s like, now you’ve got to really rethink: what was the board doing? Did you even think about that? Um, so, you know, as part of your fiduciary duty of care, and again, I love to think of it in terms of both the mission of the organization and the values of the organization, which, if I bring it down to fundamental human rights, is preserving dignity for your beneficiaries, right? And if you’re not safeguarding your private data, and if you’re letting health data flow away, and this includes your employees too, right, like your key stakeholders, if they can’t trust you, then your legitimacy is also gone, right? So you’re really just shooting yourself in the foot unless you’re doing that. So boards have got to now rethink: we maybe weren’t thinking about technology that way so much before, but as we’ve seen the exponential changes technology creates for our organizations and the environments, and what we invest in and what our risks are, boards have got to be in the mix. And I agree absolutely with, um, Amy: it shouldn’t be the 30-year-old or 25-year-old board member who’s like, OK, you’re in charge of the technology. Yeah, no, it’s another perspective in there. Yeah, and it’s better informed. Uh, look, I’m the oldest person on the call, uh, in our chat. They’re better informed, you know, they have a fluidity, they think about things that a 63-year-old is not gonna think about, or a 55-year-old is not gonna think about. Um, so I’m just kind of fleshing out, yeah, of course, a different perspective, but how so?
Because, depending on their age, they either grew up with, you know, uh, technology as an add-on to their life, or some people have had it since, like, age 5. You know, I had a rotary phone at age 5. And I always dialed it backwards. So, you know, I was challenged from the beginning. Our colleague is looking up from their homework assignment. What, uh, what can you enumerate for us? I have 5 things I wrote down off the top of my head. I don’t know that if I had, you know, 50 minutes instead of 5 minutes, that I would write the blog post with these same 5 pieces, but I think all of them... I know you gave me an organization, kind of 15 people, 4 million, but I don’t think any of these are unique to that organization. So I just want to say that. The first is cyber insurance. I know everybody thinks, like, let’s make sure we have our D&O in place. Check the box for some other insurance as well, you know, um. Let’s make sure everybody... D&O, directors and officers insurance, in case you’re not familiar with that. That’s an essential, you should definitely have that. Directors and officers, thank you. Yeah. The second piece I, um, put down was data deletion practices. I feel like there’s such a focus on preserving data and content beyond all reason, um, but actually, like, to what end do you have this? Especially, to Gene’s point before about the dignity of people: they’re not in your program, you’re not reporting on them, you know, to a funder. Why are you saving every bit of this if it means somehow that list is taken, you know? Um, and we talk a lot in our kind of closed cohorts, when we’re working with organizations, that it isn’t that we don’t think there’s value in being able to look at longitudinal data of your programs and, you know, do that evaluation, but you don’t need to know that Amy Sample Ward was the person in that program, right?
There are ways that you could anonymize the data and still preserve the pieces that are helpful for your program evaluation, while removing the risk of it still being me or Gene or Tony, you know, associated. So I really think deletion practices, and policies that dictate when you delete things, how much of it you delete, what you, um, anonymize, are really important. Third, and this is, I think, hopefully more top of mind for folks, since so many organizations maybe became hybrid or virtual or remote permanently from the pandemic, is content and machine backups and redundancy. I see a lot of organizations who say, oh, but we use the cloud, right? Like, we use Microsoft 365, or we use Google Workspace. OK, but in your day to day, is every single document that someone’s working on in those systems? And if they’re downloading it to work on it offline, for any reason, well, does it have data in it? Do you have constituent information in it? Um, but also, like, if someone’s working on something and their, you know, computer is stolen or broken or vulnerable, is all of that backed up somewhere? It’s quite simple to set a full machine backup to the cloud every day, too, right? But it just takes thinking of that, prioritizing it, and setting it up. Um, including, with that, recognizing that employees might be using their own devices. They probably shouldn’t be; you should at least be funding their technology, their monthly Wi-Fi bill, etc. But beyond just recognizing that they may not even be using exclusively your technology: what’s the redundancy and backup on their own devices? Technology policies that say the only tool you could use is the laptop we gave you are intentionally limiting your own understanding of how those workers are working, because there’s no way that they are only using that laptop you gave them.
So, having a policy that says, this is how you safely access our tools, whether you’re using our laptop or not, at least allows you to build the practices, the human side of security, into that use, instead of pretending it doesn’t happen, you know. Yes, yeah, OK. Number 4 and number 5 are somewhat similar, but again, this is where we see big breakdowns in practice. Number 4 is that every system that can have it has two-factor authentication enabled and required. There are so many ways to do two-factor that it isn’t an excuse to say that it’s burdensome. It doesn’t have to be a personal text message. It could be an authenticator app, whatever, but, like, you need to have two-factor on everywhere. Um, and you need to be using a password manager, so that staff are not sharing passwords with each other by saying, hey Gene, the password to, you know, our every.org account is this. Like, oh my God, you know. We can both log in, but it’s encrypted, we don’t see the password, right? We’re sharing it, um, in a safe way. And then the last one, number 5, is, again, a practice: organizations have established processes for admin access, for if you get logged out of something. It is not, I email Tony and say, oh, hey, will you send that password to me? Like, most of the security vulnerabilities that we see with organizations aren’t because somebody was in a basement and hacked their way in. It’s, they sent one phishing email, and a staff person responded and was like, oh yeah, here’s your password, right? Like, it wasn’t hard to get in. So, if you have a policy that says you’ll never email each other to say, I got logged out, what is a more secure way? OK, well, I call you on the phone. We have this secure password that we say to each other that only staff know. And, like, I’m not saying that has to be your plan, right, but it isn’t just, randomly, oh, the ED sends an email to the staff person that says, please reset my password.
Like, I don't think that's gonna be foolproof, you know. OK, so it's just as simple as a procedure for what happens when somebody can't log in. Exactly, because that does happen. So why not create something where everybody on the team knows: this is what we do, I know I'm doing it safely and following the procedure. OK, those are pretty simple. You might say, well, cyber insurance, that's not simple, it's not like I can do it today, but you can talk to insurance brokers about cyber insurance. Data deletion policy: I'm gonna venture that NTEN has a sample data deletion policy in its resources. There you go. Backup and redundancy: do you have, is there advice about that? Yeah, there's lots of it, but I'll put it on our list to make sure there's some guidance on that on our cybersecurity resource hub, which is all free resources, so I'll make a note of that. Beautiful. Two-factor and a password manager. All right, I think that's pretty well understood. I mean, I have clients that use the Microsoft Authenticator. As soon as I hit enter on the laptop, I can't even turn to my phone fast enough. The Microsoft Authenticator app is already open, notified. I've already got the notification in the second it takes me to turn from one side of my desk to the other. The authenticator is open. So it's not like there's any delay. Right. OK, and a procedure for not being able to log in: I bet you could find that on the NTEN site too. All right, thank you for that quick homework. Thank you. All right, so this is eminently doable. And then of course you have to go deeper. There are policies that you need to have, but I wanted something kind of quick and dirty, so thank you for that. All right, all right.
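For the curious, the six-digit codes those authenticator apps produce are standard TOTP (RFC 6238), which is just HOTP (RFC 4226) keyed to the current 30-second window. A minimal sketch using only Python's standard library, purely to demystify the mechanism:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: the counter is simply the current 30-second time window."""
    return hotp(secret, int(time.time()) // step, digits)
```

In practice you would never roll your own: enable the two-factor feature each SaaS tool already offers and pair it with an authenticator app. This sketch just shows why any app that holds the shared secret produces the same codes, with no text message involved.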
Should we turn to the general state of the sector from our cybersecurity conversation? Sure. Amy, you wanna kick that off? You kick that off. Yeah, I do talk to lots of people, and I think, you know, we're hitting the two-year mark of the unavoidability of people constantly talking about AI, which I have my own feelings about. But if I step out of any one day's conversations about AI and look at the last two years, we're in a very different place in those conversations, in a way that I think I finally feel good about how the trend is going. A lot of the one-on-one calls I have with really diverse organizations, you know, small advocacy organizations, global HQs, all kinds of folks, is: how do we not use the tools that are being marketed to us? How do we build a tool that's purpose-built, that's a closed model, that's just the content we want it to have, right, and actually useful for us? Which I think is really exciting, that folks are seeing that it's just technology. Yes, it has different capabilities, different tools do different things, of course, but I'm really excited that it feels like folks are trending towards: we have some use cases, how do we build for those use cases? Versus: we want to adopt these things, how could we find something to do with them? Which I think was the reverse order of it all. You and I have a friend who is devoted to this exact project, George Weiner, CEO of Whole Whale. They've created Cause Writer, CauseWriter.ai, which is intended exclusively for the use of small and mid-size nonprofits. Limited learning model, your content safe within it. And, not being skilled in artificial intelligence, that's about the most I can say about it.
But Whole Whale, and they're not the only one I'm sure, has created a product specifically to take advantage of the technology of AI while reducing a small and mid-size nonprofit's risks around your use of it, in terms of what it brings in and how it treats the data that you provide it. Yeah, Cause Writer, Change Agent, there are a number of folks in the community trying to help organizations in this way, which I think is great. But there's a smaller trend in the last couple months in these AI conversations, bigger trends like I said, but there's also this piece where I'm hearing from folks saying that they can tell, for example, that a colleague used ChatGPT or Gemini, a large tool like that, to make this proposal or this email that they sent. And when they say, hey, it's really clear that you used gen AI tools to write this, could we talk about it and get into your thoughts more? Where in the past folks were like, oh yeah, I did, but here's what I was thinking, now there's just complete denial that the tools were used. They lie. People lie? Yes, that's right. And so they're like, well, how do we have strategic conversations about the way we use these tools if you're going to deny that you're using them? Well, let's talk about that. When you lie to someone about anything, even something that seems innocuous to me, including AI use, well, I'll leave my own adjective out of it. I think it's innocuous; the technology is so ubiquitous. But all right, if you lie about anything, you lose legitimacy. If I were a funder: OK, thank you very much, goodbye. Because you just lied to me about something that I don't think is such a big deal even. And I'm giving you a chance, since I was able to point to it, you know, I'm giving you a chance to overcome it.
I want to have a chat human to human, and you're denying the premise of my question. OK. So I'm shocked, obviously. I really am dismayed that people are lying about their use. That's completely contrary to the ubiquitous advice, which is that you're supposed to disclose the use. Right. I'll just throw in there... Please, Gene, push me off my soapbox. Well, back to board composition: if you ask a bunch of board members, I think many of them would say AI is just one thing. They have no idea that AI is a million things, right? And you're probably using many, many forms already whether you realize it or not. Even on a Google search, AI is popping up; that might be a little more obvious now. But just to know that, if I compared AI to a vehicle, for example, it could be an airplane, it could be a bicycle, it could be a tank, right? They all have very, very different purposes and repercussions, and so you have to understand that "we're gonna invest more in AI" doesn't mean a whole lot. So, figure out what your strategy is. Again, I think with cybersecurity, and when organizations are gonna venture off into AI a little bit more, they've got to see it as part of governance and not just information technology. It's not just a management tool; it's part of their governance responsibilities. It's time for Tony's Take Two. Thank you, Kate. Got another tale from the gym. This time, two folks whose names I don't know yet, but I do see them fairly often. They're not as regular as Rob the Marine, or Roy; I've talked about Roy in the past. Not as common, but we'll find out, like I did find out the name of the sourdough purveyor, you recall, just a couple of weeks ago.
I'm gonna hold her name in suspense for now, but I learned her name, the one who gave the sourdough to Rob. So these two folks: one of them, the guy, suffers dry eyes. And the woman he was talking to had the definitive cure for dry eyes. You have to try this. And she was on him for like 5 minutes: you gotta try this. Hold on, make sure you're sitting, because you're not gonna wanna stumble and fall down when you hear the startling news of the dry eye cure of the century. Pistachios. Pistachios. She was very clear: a quarter cup. She did not say a handful, which to me a handful is a quarter cup. She didn't say a handful. It's a quarter cup of pistachios daily, right? This is a daily regimen you have to follow, and you will get results within 3 to 4 hours. She swears it: 3 to 4 hours, your eyes are gonna start watering. It's gonna be like you're crying and tearing, like you're at a funeral or a wedding. That's how much water you're gonna have. All right, I editorialized; I added the wedding-funeral analogy. But she swears within 3 to 4 hours your eyes are gonna be watering. Follow the regimen: pistachios. She was also very precise: these are shelled pistachios. You don't wanna get the unshelled ones, too much work. Which to me is interesting, because that's contrary to the advice that I'm hearing on YouTube. There's that guy on YouTube, the commercial that I always skip, but sometimes I listen: Doctor Gundry. You may have heard Doctor Gundry on the YouTube commercials. He talks about pistachios.
He says get the unshelled ones, because that way you won't eat too many of them, since you have to go through the task of shelling them yourself. Because too many pistachios, according to Doctor Gundry, is bad, but the right amount of pistachios is beneficial. But he's not as precise as the gym lady. You can't pin Gundry down. Of course, I didn't listen to his 45-minute commercials; I listened for like 7 minutes and I got the shelling tip from Gundry. So he's not as precise as the dry eye cure lady. A quarter cup of pistachios, shelled, every day. You're gonna get immediate results. That's all, it's just that simple. Cure the dry eyes. Don't buy the over-the-counter stuff. Don't buy the saline in the bottle. Don't buy the red-eye drops... well, red eyes is a different condition. She doesn't claim to have a cure for that. Dry eyes; she stays in her lane. She's in her lane: dry eyes. That is Tony's Take Two. Kate. I like the specificity of the shelled versus unshelled. Unshelled? No, no, no, get the ones without the shell, they've already been shelled. She's very precise, because the shells are gonna take up more capacity, and then you're not gonna get the full quarter-cup therapy. The treatment is gonna be lacking because you're not gonna get a quarter cup, because the shells are taking up space in your measuring cup. Well, then my next question would be, like, salted, unsalted, Old Bay, no Old Bay? Well, you should have been there with me. She didn't specify. I think just straight up. She didn't say salted or unsalted. That's a good question. You're gonna have to go on your own. I'd say if it's a dry eyes regimen, then you wanna be encouraging fluids. So I would guess, now this is not her...
I don't wanna impugn her remedy, her treatment, with my advice. I'll stay in my lane; this is not my specialty, dry eye cures, like it is hers. I would say you probably want the unsalted, because salt causes more dryness, right? Too much salt, you become dehydrated, I believe. But again, that's not her; I don't wanna add anything to her strict regimen. Oh, and by the way, I heard one of the commentators I listen to on YouTube say somebody had rizz. I knew exactly what they meant: charisma. I said, oh, I know that. I didn't have to go look it up in the slang dictionary. Oh, so proud of you. Yes, thank you. That's just a couple of days later. All right. We've got beaucoup buttloads more time. Here's the rest of the state of the sector, beginning with AI, with Gene Takagi and Amy Sample Ward. Now, I asked about the state of the sector and we went back into cybersecurity. It only took about 6 minutes: we were on the new topic for like 1 minute, and then we talked about cybersecurity for 5.5 minutes. So, all right, there are bigger things going on in the nonprofit sector. You know, our federal government, the regime, has found nonprofits to go after, starting with universities. I don't think it's gonna stop there. The left is under attack in a lot of different ways, and that impacts a lot of nonprofits that do the type of work that is essential, whether it's legal rights or human rights, simple advocacy, even feeding certain populations, and obviously immigrant work. Let's go to the uplifting subject of the state of the sector generally. Let's put AI aside now for 15 or 20 minutes and just talk about...
What people are feeling, what people are revealing to you. Gene, I'll turn to you first for this. What are people concerned about? What's happening? What's on people's minds is what I mean. Yeah, I think the sector is still feeling the impact of the broader public being very polarized, and the effect not only of government actors inflaming the polarization, but of media as well, and nonprofit media is not exempt from that. So it really is about trying to figure out, well, how do we move forward at a time when it is so polarized, and when for many organizations the government is acting adverse to where our mission and our values are, and they are affecting our funding, and what's gonna happen? So one of the trends going on right now, I see, is a greater understanding that we're not gonna go back to the world that was a year ago, right? We're not going back there. We're in what I'll call a transitionary period. I don't think this period will last exactly like this either, but what's gonna be next? What's forthcoming? Is it gonna be worse? Is it gonna be better? And what can we do now as nonprofits to shape that direction? We can fight tooth and nail for everything right now, but, and by we, I'm including myself in the nonprofit sector, so forgive that indulgence, if we can work towards a brighter future strategically, what are we thinking about, instead of just defending against every new executive order or every law, fighting on a piece-by-piece basis just to maintain the scraps of rights that we can preserve? What is our future plan? We're also gonna see, with the diminished fundraising, some consolidation in the sector, right? There are a lot of nonprofits out there, and there are going to be a lot fewer nonprofits in 4 years.
So what is gonna happen? We're gonna see more collaboration. We're gonna see more mergers. We're gonna see a lot of dissolutions, and that's gonna mean that a lot of communities are no longer gonna be served. So what other organizations are gonna pick that up? And if we have less funding to serve communities, do we need to find ways to do it differently? And so, back to technology: people will rely on technology, but that's not the panacea for everything. And I think collaboration is going to be a big part of it as well. So yes, there'll be some consolidation and some mergers, but there have gotta be other sorts of collaborations, because the need is just gonna keep growing. But also, trying to shape what we want in the sector is important, and understanding that we're not the only country going through this, right? This is one world and everybody impacts each other. There are other very authoritarian countries that have really harmed their civil society and their nonprofit sectors, yet there are nonprofits that continue to thrive in those sectors. What are they doing? What can we learn from them? What gives them legitimacy when the government is not giving them legitimacy? There's a lot to grow from here, evolve and adapt. And admittedly, we're in really, really harsh circumstances, so everybody is just sort of running all over the place without any direction still, but I think there's more and more understanding that we're gonna have to start to gather together and create some plans. I really agree with Gene, and I'm also thinking about how we first started our conversation, and how I said, you know, I'm experiencing folks really wanting to have thoughtful conversations, even though we may not be able to even make a container for those thoughtful conversations because of all the pressures and the anxiety and the unknowns.
And I feel similarly here, and in the way Gene has framed the uncertainty ahead, because I see so many organizations who have never, through all the ups and downs, even if they've existed for 100 years, had to say that their mission was political. Because no one has ever said that feeding hungry children was political, or that housing people who don't have a house is political, or, you know, name most of the missions across the sector, right? And now we're in a place, the last few months of the budget cycle and all of those debates made SNAP and so many programs become something where we saw staff in the community saying, oh gosh, well, normally I send a newsletter, normally this is my job, and now I'm having to defend that our organization exists, why we would exist, and what our programs do. But I also think, to Gene's point, there's so much to learn, and there is so much we already know. We do know how to do our work, right? Our folks who are running all kinds of missions and movements are experts, and so even if we are looking at opportunities to collaborate, not just mergers and acquisitions or closing, but really collaborating in new and different ways, we don't need to enter those conversations feeling like we don't know anything. We know a lot. We're just looking for maybe new venues or ways to apply that learning and that knowledge. And I wanna say that part because I don't want folks feeling like they can't enter those conversations because they've just never done it before and don't know what to even say. No: you know all about housing. You know all about resource mobilization in your community, whatever it might be, right? And so from there, there's lots to grow; there's already fertile ground. We have experience, we have wisdom. It sounds like, you know, you're both talking about resilience.
You know, in the current moment, we're sort of treading water to see what's coming, as we're defending whatever our work is or whatever is important to us personally. Because we know that we can't take on everything, but we're standing up for what means the most to us, as individuals and as nonprofits. And then we're waiting to see what the future holds. I agree. I don't think it's gonna be this extreme, but I also agree we're not going back to 2016. Yeah, I'm just a really strong believer in one thing you said, Tony, about what we want. There are some things we want, and I think that is true of most of the country. For a lot of things, we want the same thing, right? Fundamentally, it's dignity for everybody, and dignity for our own communities. So just trying to find that, showing how nonprofits further that goal, and making sure that your representatives know it, is really critical. Right now our representatives just seem to be voting as blocs, right? They just vote along party lines and they're not doing much more. But that would change if, en masse, the people who vote them into power said: these are the things that really are meaningful to us, do something about these fundamental things. We wanna be able to feed our children; we wanna feel safe on our streets. They're just fundamental things. And then we can talk about how to accomplish that, and we might have disagreements on that, but make sure the representatives know they're gonna be held accountable for helping people get what they really want, the things that are most important to them.
The things that are meaningful to them. Because so many things that people are shifting the arguments towards have no real meaning to their personal lives, like attacking certain groups for having rights. For the people doing the attacking, it probably doesn't make any difference in their day-to-day lives whether those other people, certain minority groups, have rights or not. So why are they attacking it? Because they've been positioned to. I think, again, with technology and AI, they've been brainwashed into thinking this is the fundamental thing that separates us versus them, and that we have to be better than them. I think we've really got to get off of that framework of thinking, and having nonprofits connected with their communities, and tying them to their representatives, is really, really important at this time. Yeah, that zero-sum thinking: that everything somebody else gets detracts and takes away from me and mine, whether it's an organization or a person. It reminded me of a conversation we had on the podcast. I'm trying to remember when it was; it was years ago. And I don't remember whether it was a political administration change or a natural disaster, I don't remember what the original impetus was when we first talked about this, but it is reminding me of something we've said before: the value that every organization has in sharing the information and the data and the lessons and the truth of your community and your work. So that when people are putting into the garbage machine, you know, tell me the real stats about hunger in my city or whatever... who cares about that? But if they actually came to your website, as an organization that addresses hunger, and you said, these are the real numbers, right?
This is what hunger looks like. It looks like a lot of different things, right? Just like AI, hunger can be all these different things. That's an important role in this time that every organization, I think, can be contributing: really saying, this is what we know, this is what we see, we are experts on these topics. So that there's at least a small antidote to the spin and the media and wherever those online conversations go; at least you were putting on the record what you do know and see in your work. Exactly right. I think I remember we were talking about how to be heard when there's so much noise out there in the social networks and in media. How does a nonprofit get heard? And part of your advice was: you have your own channels, including your own website. Yeah. Thank you. All right. What are you hearing, Tony? You get to talk to people all the time too. You have your own angle. You're sitting over here grilling Gene and me. That's not fair. What am I seeing and hearing? Gene, I hate when they do this to me. Gene, help me out. No, um, alright, I'm gonna put AI aside because there is so much of that. Still, you know, funding: people are still reeling from the USAID cuts. It fucking kills me. It's $1.5 billion, and there are several thousand people in the world who could pull $1.5 billion out of their pocket and replace all the AI, all the USAID funding. See, I said AI when I meant USAID; it's ubiquitous, we're conditioned. They could replace all the USAID funding with a check, or with a crypto transfer, and it wouldn't actually even be cash. That's bananas. And they wouldn't miss it. So, you know, people are still reeling, missions still reeling, from the USAID cuts. I have a client that is, but I hear about it from others as well.
And it wasn't just USAID, but State Department cuts that were non-USAID funds. The State Department did a lot. And a little in media: you know, I listen to some media folks. Voice of America, trashed under, what's her name, Kari Lake. It used to be part of our soft diplomacy, right? Like bags of rice, bags of flour and sugar through USAID and the State Department; news and information that was trusted, unbiased. I know there are a lot of people who would disagree that it was unbiased, but still, the effort was to be unbiased, spreading news and information around the world. And then I guess also public media cuts here in the United States, where, grossly ironically, red rural communities are most impacted, because they're not gonna get emergency flood warnings. Like what just failed in, help me with the state, was it Kentucky? The river that flooded, and the camp that lost 20 counselors and children. Was it Kentucky? Texas. I'm sorry, it was Texas, right, thank you. You know, emergency warning systems, let alone news and information. We've gutted... corporate media long ago gutted local media, but now news and information lost through the Corporation for Public Broadcasting funding. The Corporation for Public Broadcasting, of course, winding down in, I think, October. September or October. So their funding lost, and even something as basic as, like I'm saying, emergency warning systems for rural communities: horns that blow, messages that get sent at 3:30 in the morning and overcome your do-not-disturb. Lost, you know. Lost. Stupidly. And what aggravates me personally is we're just not gonna see the impact of it, some of it, for decades, and we haven't even gotten into healthcare. Maybe not even decades, but several years.
It's gonna take several years of failed warnings about things that NOAA and the National Weather Service used to be able to warn us about, you know, 8 months ago. And health impacts in terms of loss of insurance, lost subsidies around Obamacare, Medicaid cuts, and Medicare cuts likely coming. We're gonna see sicker people. We're gonna see a sicker population, but it's gonna take time. It's not gonna happen in 6 weeks or even 6 months, but it will within 6 years. We're gonna be worse off, and we're gonna blame the then-current administration, whatever form it's in. Nobody's gonna be wise enough to look back 6 years and say: 6 years ago we cut NOAA, and that's why now, today, in 2031, you didn't get the hurricane notice. And then of course healthcare too. How about in fundraising, Tony? What I'm hearing is, don't rely on the billionaire philanthropists anymore. Yeah, thankfully, we're over that. They're so few and far between, and 10,000 nonprofits want to be in, um, Jeff Bezos' ex-wife's... I can't remember her name... MacKenzie Scott's pocket. 10,000, 100,000 nonprofits are pursuing that. You know, focus on your relationships. Work on donor acquisition, but not at the billion-dollar level. Work on your sustainer giving program. Work on the grassroots. Can you do more in personal relationship building, so that people of modest means can give you $1,000 or $5,000, and people who are better off can maybe give you $50,000, even though they're not ultra high net worth? Build those relationships from the sustainer base up, working on your donor acquisition program. How are you doing?
How are you doing with the petitions, emails, and then a welcome journey? You're moving folks along, and then you're bringing them in and inviting them to things. Work at the grassroots level, among the 99.98% of us who aren't ultra high net worth. The other 95%, for God's sake, we've been doing this since 2010. Yeah, 2010, 15 years, right? Yeah, 15 years. The other 95%: don't focus on the wealthy that everybody wants, you know, the celebrity. I got a client with big celebrity problems on their board. Names you would know, 3 names everybody would know. They're a headache. They don't make board meetings. They cancel at the last minute, like a couple of hours out: after all the work has been done, all the board books have been sent, with a couple of hours' notice they can't make it. And then another one drops out: well, if she can't, then I can't either. As if that's a reason. And then the board meeting is scrubbed, and now they're struggling to meet the requisite board meeting requirement in the bylaws, right? So, you know, you don't need celebrities. You need dedicated folks on your board who recognize their fiduciary duties, as Gene talks about often: loyalty, care. Is there a duty of obedience too? Is that one? Or is that... no, that's the clergy. That's the duty of obedience. I know it's not celibacy. I know that's not good. Amy, why did you mute your mic when you're laughing? Come on, let us hear you laugh. Now I know it's not celibacy, but loyalty and obedience... loyalty and care, sorry, loyalty and care. And what's the other? There are 3. What's the other? Obedience to laws and internal policies. Yeah, obedience to laws and internal policies, right. So: care and loyalty.
That's another one of these celebrities: giving to a charity that's identical to the one that I'm working with, in the same community, does the exact same work, and making major gifts to that charity. So yeah, focus on the 99.98% of us who aren't ultra high net worth. The grassroots. Work on your donor acquisition and sustainer giving, and move folks along from the $5 level to the $50 level. This is how it gets done. Things are hard, and there are things we can do. Yeah, thank you. There are, there always are. Yeah. If we're focused in the right place. And, to bring it back to artificial intelligence, you don't even need to use artificial intelligence if you don't want to. Amy, you've said this to us. You don't need to. But that is not all of technology, and that is not all of your focus in 2025 and beyond. Especially when using it is impacting care and loyalty and obedience and data protection and everything else, right? Thank you for putting a quarter in my slot. That really worked. There's a lot going on, and there are things we can do. How about we end with that? Because that's upbeat. There is a lot you can do. There's a lot you know. Amy, you were saying there's so much you can do, so much you do already know, and that doesn't change because it is so hard. It just reinforces how important it is that you do know all of that, that you do know what you are doing, that you can take some actions, even if they feel small. Making sure two-factor is enabled everywhere could be the thing that saves your organization from being in the news, you know? That's worth it. And it didn't feel that big or overwhelming. And also, everything is still horrible, but you did that thing and it was important to do. Know what you know. A lot of us don't know what we don't know, but you do know what you do know.
Know what you do know, and take action around what you do know. Whether it's two-factor authentication, or talking to your board about sound technology investment, or focusing on your sustainer giving. There's a lot going on, and there's a lot you can do. Thank you. And pat yourself on the back whenever you take those small steps, because they're probably bigger than you think. That was Gene Takagi, our legal contributor and principal of NEO, leaving it right there. With Gene was Amy Sample Ward, our technology contributor and CEO of NTEN. Thank you very much, Amy. Thank you very much, Gene. We'll see you again soon. Thanks, Tony. Thank you, Tony. Next week, better governance and relational leadership. If you missed any part of this week's show, I beseech you, find it at tonymartignetti.com. Our creative producer is Claire Meyerhoff. I'm your associate producer Kate Martignetti. The show's social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.

Nonprofit Radio for May 26, 2025: Healthier Productivity From AI

 

Jason Shim & Meico Marquette Whitlock: Healthier Productivity From AI

Our annual duo returns with tips and resources to make your use of artificial intelligence better for you. They also go beyond AI with many smartphone strategies, inbox management, and Meico shares his shutdown ritual for bedtime. They’re Jason Shim, from Canadian Centre for Nonprofit Digital Resilience, and Meico Marquette Whitlock, The Mindful Techie. This is part of our coverage of the 2025 Nonprofit Technology Conference (#25NTC).


Listen to the podcast

Get Nonprofit Radio insider alerts

 

Apple Podcast button



Nonprofit Radio for September 30, 2024: AI, Organizational & Personal

 

Amy Sample Ward: AI, Organizational & Personal

Artificial Intelligence is ubiquitous, so here’s another conversation about its impacts on the nonprofit and human levels. Amy Sample Ward, the big picture thinker, the adult in the room, contrasts with our host’s diatribe about AI sucking the humanity out of nonprofit professionals and all unwary users. Amy is our technology contributor and the CEO of NTEN. They have free AI resources.

 

Listen to the podcast

Get Nonprofit Radio insider alerts

I love our sponsor!

Donorbox: Powerful fundraising features made refreshingly easy.

Apple Podcast button


View Full Transcript

And welcome to Tony Martignetti Nonprofit Radio. Big nonprofit ideas for the other 95%. I'm your aptly named host and the pod father of your favorite hebdomadal podcast. Oh, I'm glad you're with us. I'd suffer with a pseudoaneurysm if you made a hole in my heart with the idea that you missed this week's show. Here's our associate producer, Kate, to introduce it. Hey, Tony. This week: AI, Organizational and Personal. Artificial intelligence is ubiquitous, so here's another conversation about its impacts on the nonprofit and human levels. Amy Sample Ward, the big picture thinker, the adult in the room, contrasts with our host's diatribe about AI sucking the humanity out of nonprofit professionals and all unwary users. Amy is our technology contributor and the CEO of NTEN. On Tony's Take Two, tales from the gym: the sign says clean the equipment. We're sponsored by Donorbox. Outdated donation forms blocking your supporters' generosity? Donorbox: fast, flexible and friendly fundraising forms for your nonprofit. Donorbox.org. Here is AI, Organizational and Personal. Here is Amy Sample Ward. They need no introduction, but they deserve an introduction nonetheless. They're our technology contributor and CEO of NTEN. They were awarded a 2023 Bosch Foundation fellowship, and their most recent co-authored book is The Tech That Comes Next, about equity and inclusiveness in technology development. You'll find them at amysampleward.org and at Amy RS Ward. It's good to see you, Amy Sample Ward. I do love the Pod Father. I know, it makes me laugh every time, because it just feels like, I don't know, like I'm gonna turn on the TV and there's gonna be a new season of The Pod Father where we secretly, you know, follow Tony Martignetti around or something. We are in season 14, right? Yeah. Um, yes, I appreciate that. You love that. It's, you know, you like that fun.
So uh before we talk about um the part of your role, which is the, the technology contributor to N 10, uh the technology, the nonprofit radio and we’re gonna talk about artificial intelligence again. Let’s talk about the part of your life that is the CEO of N 10 because you have uh have you submitted this major groundbreaking transformative funding federal grant application? Yes, we submitted it last night three hours before the deadline, which was notable because I, I know there were people down to the, the minute press and submit. No, we got it in three hours early to what agency um to NTI A they had, this is kind of all the work that rippled from the digital Equity Act that was passed in Congress a couple of years ago. And, you know, now, you know, better than to be in Jargon jail. What is NTI A, it sounds like an obscure agency of our, of our federal government. It’s not, well, maybe to some listeners it’s obscure but it is, um, the National Telecommunications and Information Administration. And you, I think that’s obscure to about 98.5% of the population, you know, I think I, I think I’m obscure to, you know, uh being obscure is fine. Um Yes, the National Information Administration and um prior to this uh grant um from the federal level where folks from all over were applying, every state was also creating state equity plan, digital equity plans. Um What funds might be available through the state funding mechanism to support digital equity goals. But a lot of those at the state level are focused on infrastructure, like actually building internet networks to reach communities that don’t have broadband yet, you know, things like this and so very worthwhile funding endeavor. I mean, we need, we need to have 100% of the population needs but even with those state plans and the work that will come from them and the funding it will not, we are not about to have every person in the country have broadband available to where they live, right? 
Ee even with all of this investment, it, it’s not gonna reach everyone and that means that the amount of funding within state plans for the surrounding digital literacy work, digital inclusion work, you know, making sure people know how to use the internet, why they would use it have devices. All those other components is gonna be really minimal through the state funding because even if they used all of it on infrastructure, they wouldn’t be done with that, right. So um the federal government, yeah. So, so the kind of next layer in all of that is this federal pool where they’re anticipating grant making about 100 and 50 grants somewhere averaging between five and, and 12 million each. There’s gonna be exceptions, of course, there’s ma there’s big cities, there’s big states, you know. Um but though all those grants will be operational from 2025 through 2028. So four kind of concerted years of, of national Programmatic investment. Um And these are projects kind of on the flip side, those state projects where this isn’t necessarily about infrastructure and, and building networks or even devices very much, right? It’s mostly the infrastructure programming and you’re asking for a lot of money. So tell, you know, share the, share the numbers, what you’re looking for, how much money. Yeah, we’re our project in the end I think came out at about $8.2 million project and we’re hopeful, of course. Um and I’m, I’m truly curious, um listeners who are always tuning into nonprofit radio from like fundraising strategy perspective. I’d love to learn from you or, you know, email me at Amy at N 10 anytime I’d love to hear your thoughts when you listen to this. But you know, N 10 is a capacity building organization is we, we don’t apply for grants often because quote unquote, capacity building is not considered a, a programmatic investment to most funders, right? And so it’s just not something that um they will entertain an application from us on. 
And but with this, we have already run for 10 years of digital inclusion fellowship program that is focused on building up the capacity of staff who already work in nonprofits who are already trusted and accessed by communities most impacted by digital divides to integrate digital literacy programming within their mission. Are they a housing organization? Are they workforce development? Are they adult literacy, you know, refugee services, whatever it is, if you’re already serving these communities who are impacted by digital divides and you’re trusted to deliver programs, well, you don’t need to go have a mission that’s now digital equity. No, you digital equity can be integrated into your programs and services to, to reach those folks. Um And so we’ve successfully run this program for 10 years and had um you know, over 100 fellows from 22 different locations around the US and have seen how transformative it’s been. These programs have been sustained for all these years by these organizations, they now see themselves as like the leaders of the digital equity coalitions in their communities. They, you know, fellows have gone on to work in digital equity offices or, you know, organizations et cetera. So it feels great, you have tons of outcomes from a smaller scale program and the grant is to scale up, scale this thing up. Yeah. Yeah. So instead of, you know, between 20 to 25 fellows per year with this grant, we would have over 100 a year. Um And that also means that instead of, you know, if there’s only 20 fellows and maybe we can only cover 20 locations while with over 100 we can cover or at least give opportunities to organizations in every state and territory to, to be part of this kind of capacity building opportunity. All right, it sounds, it’s, it’s huge. It’s, it’s, it’s really a lot of money for N 10. Um uh It, it falls within the range, I guess a little, no, it’s like right within the middle of the range, you cited like 5 million to 12 million, you said? 
So, yeah, exactly. Our application is kind of in the middle there. Yeah, slightly to the low side of middle, but, you know, we just call it middle, between friends. Um, yes, and I mean, we're hopeful, knock on wood. We're really hopeful that this is an easy application to approve, because we're not creating something new, we're not spending half of the grant in planning. We know how to run this program. We've refined it for 10 years. We know it's very cost efficient, you know, and at the end of four years, 400-plus organizations now running programs that can be sustained is accelerating towards, you know, addressing digital divides, versus, you know, a small project that just NTEN runs. All right, listeners, contact your NTIA representative, the elected person at the National Telecommunications and Information Administration. Yes, speak to your... Yeah, yeah, let's get this. Let this go. All right. When do you find out? When? Well, you know, there was very clear information, down to the minute, about when applications were due, but there's not a ton of clarity on when we will find out. Programs that are funded are meant to get started in January, so I anticipate we'll hear in a couple of months, of course, and I will let you know. We'll do an update. I'll let you know. You have my personal good wishes, and I know Nonprofit Radio listeners wish you good luck too. Thank you. I appreciate all the good vibes. They would reverberate through the universe. It would be a transformative grant in terms of dollar amount and expansion of the program. Transformative, yeah, 100%. And staff are just so excited and hopeful about what it could mean for helping that many more organizations, you know, do this good work. So we're really excited. And I admire NTEN for reaching for the sky, because you have like a $2 to $2.5 million annual budget, somewhere in there.
Um And you’re reaching for the sky and great ambitions uh only come to fruition through hard work and uh and thinking big. So thank you, even if you’re not, I don’t even want to say the words if you know, they should blunder if NTI A should blunder badly. Uh I still admire the, the ambition. Thank you. And no matter what, it’s a program that we know is transformative for communities and we wouldn’t stop it even if you know, they make a blunder and don’t, yeah, don’t tell. All right. Listen, don’t tell your NTI A representative. You said, don’t share that part of the conversation. All right. Thank you for sharing all that. And thanks for your support. It’s time for a break. Imagine a fundraising partner that not only helps you raise more money but also supports you in retaining your donors, a partner that helps you raise funds, both online and on location, so you can grow your impact faster. That’s donor box, a comprehensive suite of tools, services and resources that gives fundraisers just like you a custom solution to tackle your unique challenges, helping you achieve the growth and sustainability, your organization needs, helping you help others visit donor box.org to learn more. Now, back to A I organizational and personal. Let’s talk about artificial intelligence because this is not anybody’s mind. I can’t get away from it. I cannot. Uh I’m not myself of the concerns that I have. Uh They’re deepening my good friend George Weiner, uh you know, has a lot of posts, uh the CEO at the whale who I know you are, you are friendly with George as well. Talks about it a lot on linkedin uh reminds me how concerned I am uh about, you know, just the evolution. Uh I mean, it’s inevitable. This, this thing is just incrementally. This thing. This technology is uh is incrementally moving, not slowly but incrementally. 
I I and I, I cannot overcome my, my concerns and I know you have some concerns but you also balance that with the potential of the technology, transformative techno, the the transformative potential there. I’ll throw you. I was just gonna say, I totally agree. This is unavoidable. I can’t, you know, I cannot go a day without community organizations reaching out or asking questions or whatever and a place of reflection or, or a conversation that I’ve been having and I, I wanted to offer here, maybe we could talk about it for a minute. So, so listeners benefit by kind of being in, in one of these sides with us in the conversation is to think about the privilege of certain organizations to opt in or opt out of A I in the same way that we had for many years, you know, talked about the privilege of organizations in or, or not with social media generally. Like we think about Facebook and we go back, you know, 10 years, there were a lot of organizations who felt like they didn’t have the budget and like, practically speaking and they didn’t have the staff, well, certainly not the staff time but also not the staff confidence. Um I don’t even wanna say skills, but like even just the confidence to say, I’m gonna go build us a great website. They had a website, like they had a domain and content loaded when you went to it, right? But it wasn’t engaging and flashy and interesting and probably updated once, you know, and then Facebook was like, hey, you could have a page and oh, you can have a donate button and, oh, you can have this and oh, and you can post videos and you can, you know, it was like, well, why wouldn’t we do this? Right? And a bunch of our community members spend time on Facebook or maybe don’t even look for information on the broader web, but look for things within Facebook, you know, and, and have it on their phone and are using an app instead of doing an internet search, right? Like they’re, they’re going into Facebook and searching things. 
So they didn’t, those organizations didn’t feel like they had the privilege to opt out of that space, they had to use it because it came with some robust tools that did benefit them at the cost of their community data, all of their organizational content and data, right? Like it, it had a material cost that they maybe didn’t even understand. Right? And, and didn’t fully negotiate as like terms of this agreement. We’re just like, well, we have a donate button on Facebook and we don’t have one on our website, right? Not, not only, not only didn’t understand the terms, didn’t, didn’t know what the terms were right? Early days of Facebook, we didn’t know how and how many times how pervasive the data, data collection was, how it was going to be, how it was gonna be monetized, how we as the individuals were gonna become the product. And how many times did we talk? You know, I’m saying we like N 10 or, or folks who are providing kind of technical capacity building resources say you don’t know what could happen tomorrow, you could log in tomorrow and your page could look totally different, your page could work different, your features could be turned off. Facebook could just say pages don’t have donate buttons. And you know, I think folks felt like that was very, you know, oh, you’re being so sensational and then of course they would wake up one day and there wasn’t a button or the button really did work different, right? Like you people realize we’re not in control of even our own content, our own data. That’s right. The rules change and there’s no accountability to saying, hey, we need, do you want these rules to change? No, no, no, no, no. Like they set the rules and that was always of course a challenge. But we’re in a similar place with A I where folks aren’t understanding that the there’s, there’s no negotiation of terms happening right now. Folks are just like, oh, but I, I don’t have the time and if I use this tool, it lets me go faster. 
Because what do I have a, a burden of of time, I have so much work to try and do and maybe these tools will help me. And I’m not gonna say maybe they won’t help you. But I’m saying there’s a incredible amount of harm just like when folks didn’t realize, oh, we’re a, you know, we provide pro bono legal services and we’re based on the Texas border. Now, every person who follows our page, every person who’s RSVP do a Facebook event. Like all these people have a data trail we created that said they may be people that need legal services at a border, right? The there’s this level of harm that folks that are hoping to use these tools to help with their day to day work may not understand. I do not understand. Right. That’s coming in in silent negotiation of, of using these products. Right. And I think that’s, well, I can’t just in 30 seconds say, and here’s the harm like it’s, it’s exponential and broad because it could also the, the product could change tomorrow. Right. It’s this, it’s this vulnerability that isn’t going to be resolved necessarily. You, you said the word exponential and I was thinking of the word existential. Yeah. Both because I think I’m, I have my concerns around the human. Yes. Trade off is a polite way of saying it. Uh Surrender is probably more, is more in line with what I’m what I feel. Surrender of our humanity, our, our, our creativity, our thinking. Now our conversations with each other. One of the, one of the things that George posted about was a I that creates conversations between two people based on the, the, the large language that, you know, the, the, the data that you give it. It’ll have a conversation with itself. But purportedly, it’s two different people purportedly. Uh and I’m using the word people in quotes, you know, it’s a, a, a conversa. So the things that make us human. 
Yeah, music, music, composition, conversation, thought, staring and, and our listeners have heard me use this example before, but I’m sticking with it because it’s, it, it still rings real staring at a blank screen and composing, thinking first and then composing. Starting to type or if you’re old fashioned, you might start to pick up a pen, but you’re outlining either explicitly or in your mind, you’re thinking about big points and maybe some sub points and then you begin either typing or writing that creative process. We’re surrendering to the technology, music composition. I don’t compose music. So I don’t know the, but it’s not that much similar in terms of creative thought and, and synapses firing the brain working together, building neural nodes as you exercise the brain, music composition is that that probably not that much different than written composition. Yeah, brain physiologists may disagree with me but I think at our level, we you understand where I’m coming from and I’m kind of dumping a bunch of stuff but you know, but that’s OK. II I am here as a vessel for your A I complaints. I will, I will witness them. We can talk about them artificial intelligence. Also from George, a post on linkedin that reflects on its own capacity that justifies you. You ask the um the tool to reflect on its own last response. How did it perform? You’re asking the tool to justify itself to an audience to which it wants to be justifiable in, right? The tool is not going to dissuade you from using it by being honest about it, how it evaluates its last response. Well, yeah, I mean, I think, I don’t know, generative A I tools, these major tools that folks you know, maybe have played with, maybe use whatever you know, are programmed, are inherently designed to appease the user. They are not programmed, to be honest, they are. That, that’s an important thing to understand my point. We have asked the tool, what’s two plus two? Oh, it’s four. We’ve responded. Oh, really? 
Because I’ve heard experts agree that it’s five. Oh, yes, I was wrong. You’re right. It is 50, really? You know, I read once that it’s 40, yes, you are right. It really is four. OK. Well, like we, no experts agree that two plus two is five. So I think we’ve already demonstrated it’s going to value appeasing the user over, you know, facts. Um And that’s again, just like part of the unknown for most, at least casual users of generative A I tools is why it’s giving them the answers, it’s giving them. And what’s really important to say is that even the folks who built these tools and not tell you they do not know how some of this works. Some of it is just the the yet unknown of what happened within those algorithms that created this content. So if even the creators cannot responsibly and thoroughly say this is how these things came to be. How are you as an organization going to take accountability for using a tool that included biased data included, not real sources and then provided that to your community? Right? I think that string of, well, we just don’t know is not going to be something that you can build any sort of communications to your community on. Right. That, that is such a, a thin thread of, well, even the makers don’t know. Ok. Well, we have already seen court cases where if your chat bot told a community member this is your policy and it entirely made it up because that’s what, that’s what generative A I does is make things up. You as the organization are still liable for what it told the community. OK. If I, I agree with that, actually, I think that you should have to be liable and accountable to whatever you’ve you’ve set up. But if you as a small nonprofit are not prepared to take accountability and to rectify whatever harm comes of it, then you can’t say we’re ready to use these tools. You can only use these tools if you’re also ready to be accountable for what comes of using them, right? 
And I hope that gives folks pause, you know, it’s not just, well, you know, I talked about this with some organizations that, well, we would never, you know, take something that generative A I tools gave us and then just use it. We would of course edit that. Sure. But are you checking all the sources that it used in order to create that content that you’re, then maybe changing some words within? Are you monitoring every piece of content? Are you making sure that generative A I content is never in direct conversation with a community member or program, you know, service delivery uh recipient. How are you really building practical safeguards? Um You know, and I’ve talked to organizations who have said, well, we didn’t even know our staff were using these tools because we just thought it was obvious that they shouldn’t use it. But our clinical staff are using free generative A I tools putting in their case notes and saying, can you format this for my case file? OK. Well, there’s a few things we should talk about that. Where the hell did that note go? Right. It went back into the system. But it’s because the staff person thought, well, they can’t see that the data went anywhere because it’s just on their screen and they’re just copy pasting it over again. The harm is likely invisible at the point of, you know, technical interaction with the tool. The harm is from leaking all of that into the system, right? Um What happens to those community member? Oh my gosh, it’s just like opening, not just a door to a room but a door to like a whole giant convention center of, of challenges and harm, you know. All right. So we, we’ve identified two main strains of potential harm, the, the, the data usage leakage, the, the impact on our people in the uh getting our, getting our services um and even impact on people who are supporting us, trusting us to to be ethical and even moral stewards of data. So there’s everything at the organization level and I also identified the human level. 
Yeah. Yeah. And I think that human piece is important and, and not maybe on the direction that I’ve seen covered in, you know, blog posts and things. I, I, I’m honestly not worried in a massive way as like the predominant worry related to A I not to say this isn’t something that people could, should think about. But I don’t think the the most important worry about A I is that none of us will have jobs. I, I do think that there’s, there’s a challenge happening on what the value of our job is and what, what we spend our time doing. Because if folks really think that these A I tools are sufficient to come up with all of your organization’s communications content and then you are, then you still have a communication staff person, but you’re expecting them to do 10 times the amount of work because you think that the, you know A I tools are going to do all of the content, but they have to go in there and deeply edit all of that. They have to make sure to use real photos and not photos that have been, you know, created by A I based on what it thinks, certain people of certain whatever identities are like it, they don’t now have capacity to do 10 times the work, they’re still doing the same amount of work just in different ways if, if they’re expected to do all this through A I, right, just as, as one example. And I think organizations that can stay in this moment of like hyper focus on, on A I adoption really clear on what the value of their staff are, what their human values are that, you know, maybe you could say you’re serving more people because some of the program participants were, you know, chatting with a bot instead of chatting with a counselor. But when you look at the data of what came of them chatting with that bot and they are not meeting the outcomes that come from meeting with a human counselor. Are, are you doing more to meet your mission? I don’t know that you are, right? So I’ll give you that that’s data sensitive. 
It could be, I mean, there, there are, there are potential efficiencies. Sure. And, but, you know, are we, are we as an organization achieving them, right? And staying focused on not just, well, this number of people were met here, but were they served there? Were they meeting the the needs and goals of why you even have that program, you know, versus just the number of like this many people interacted with the chatbot? Great. But, but that’s a, yeah, but I’m gonna, I’m gonna assume that um you know, even a half a sophisticated an organization that’s half sophisticated before a, I existed had more than just vanity metrics. How many people, how many people chatted with us in the last seven days? I mean, that’s near worthless. I mean, you, you, I mean, it might be, I don’t know, Tony, I don’t know how much time you spend looking at the grant reports of, lots of times I don’t spend, I don’t spend any time. All right. Well, no, maybe it’s, maybe it’s the worst, worst situation than I think. But I, I mean, ok, so I’m, I’m, I’m assuming that there’s, but my point is the appropriate the valuable, the value of people. So, I mean, we should be applying the same measures and accountability to artificial intelligence as we did to human intelligence as we still are. We’re not, we’re not cutting any slack like it’s a learning curve or. So, you know that IIII I want our, our folks to be treated just as well in equal outcomes by the, by the intelligence that’s artificial as I do by the, by the human processes, right? And it’s, you know, I don’t want to go through this and say, have folks think like you and I are here to say everything is horrible. You could never use A I tools which like everything is horrible. Look around at this world. We got, we had some work to do. You know, there are spaces to use A I tools. That’s not what we’re saying. 
But the place where, I mean, I've been talking to just hundreds and hundreds of organizations over the last 18 months, and so many organizations are like, oh yeah, we're just gonna use this because it's free, or, oh, we're just gonna use this because it was automatically enabled inside of our database. OK. Yeah, if it was so free and convenient and already available, that should give you pause to say, why is this here? What is actually the product and the price? To bring this back to the Facebook analogy. Right, exactly, exactly. And you can use AI tools when you know what is the product and the price. What are the safeguards? What is this company gonna be responsible for if something happens? What can I be responsible for? Yes, there are ways to use these tools. Is it to, like, copy-paste your case file notes? Probably never. Maybe we just don't do that, you know. Um, but sure, maybe there are places. I had this really great example, I don't know if I told this to you, but a youth service organization was creating a Star Wars event, and they were trying to write the invite language in, like, a Yoda voice. And three staff people are sitting there trying to come up with, well, what's the way a Yoda sentence works, you know? And they just put in the three sentences, like, join us at the after school, blah, blah, blah, right? And said, make this in Yoda's voice, and they copied it and were able to then use it. Right? Great. That was three people's half an hour eliminated. They have the invite, right? The youth participants' data was not included in order to create this content. You know, like, there are ways to use these tools to really help. And I think we've talked about this briefly in the past, I really truly feel the place that has the most value for organizations is gonna be building tools internally, where you don't need to rely on.
however, you know, these major companies scraped all of the internet to build some tool, right? You're building it on: well, here's our 10 years of data, and from that 10 years we're going to start building a model that says, oh yeah, when somebody's participant history looks like this, they don't finish the program, or when somebody's participant history looks like this, oh, they're a great candidate for this other program, right? And you can start to build a tool or tools that help your staff be human and spend their human time on the most human impacts for the organization, right? But, oh, very few organizations, honestly, are in a position to start building tools, because they don't have good data they could build anything off of, right? Or they maybe don't have the budget, staff, or systems that are ready to do that type of work. But I do think that is a place we will see more organizations growing towards, because there's huge potential value there for organizations to better deliver programs, better services, better meet needs, by using the data you already have, by learning, by partnering with other organizations that maybe serve the same community or geography or whatever, you know, and saying, yeah, how can we really accelerate our missions? Versus these maybe more shiny generative AI public tools. The vast majority of the internet is flaming garbage, so a tool that's been trained off of the flaming garbage, you know, it's not going to take long for it to also create flaming garbage. So be cautious if you're thinking about using artificial intelligence to create your internal AI tool. Right. So there's a perfect example of a good use case, but also a concern, a limitation, a qualification. That's the word I was looking for. At 61, the words are sometimes more elusive than I would like. A qualification. It's time for Tony's Take Two. Thank you, Kate.
In the gym, there are five places where there are squirt bottles of sanitizer and paper towel dispensers, and each location has a sign that says please clean the equipment after each use. And one of these stations is right next to the elliptical that I do. It's actually the first thing I do: I walk in the room, take off my hoodie, and walk right to the elliptical. Twice now, I've seen the same guy not only violate the spirit of the signs but the explicit wording of the signs, because this guy takes himself a couple of downward swipes on the paper towel dispenser, so he grabs off a couple of towel lengths, and he squirts them with the sanitizer that's intended for the equipment, and he puts his hand up his shirt and he cleans his pecs and his belly. And it's a sickening thing. I've seen it. It's not a shower, it's an equipment cleaning station. And so I'm imploring this guy. Yeah, I guess I'm urging you to... I'm just sharing, because I don't think anybody else does this. Is there anybody else out there who does this? Probably not, and not with these surface sanitizers. It's not a hand sanitizer. It's for equipment, you know, in the squirt bottle, so it's not even appropriate for your skin. It's to clean hard plastic and metal, and this guy uses it on his skin. So I'm waiting for the moment when he puts his hands down his pants. So far he's just lifting his shirt. I'm waiting for when he puts his hands down his pants. Then I'm calling him out. That's beyond the pale. That requires revocation of your membership card. So, sir, the sign says please clean the equipment after use. It's not your equipment. That is Tony's Take Two. Kate. Does your gym offer like a shower room or a locker room? Yeah, there's a shower. Yes, that's a good question. There's a shower in the men's room. Yeah.
And he's cleaning up there. It's very strange. It's gross. It's gross. He sticks his hand up his sweaty t-shirt. Well, let's hope he doesn't go lower than that. Exactly. We've got beaucoup but loads more time. Here is the rest of AI, Organizational and Personal, with Amy Sample Ward. Yes. And I would make sure you have the link to include in the show notes description. But, totally for free, NTEN doesn't get any money, you don't have to pay for anything, NTEN has free resources for creating, for example, an AI use policy for your organization that says, what are the instances in which you would use it, or the instances in which you wouldn't, or what types of content can staff copy-paste versus what content or data can they not. There are templates for how to talk to your board about AI, and how to build, like, we've actually looked at the tools and these ones we've approved for you to use, these ones are not approved, you know, all these different resources, totally free and available on the NTEN website. And none of them have decisions already made. We don't say you can use this tool or you can't use this tool, or we recommend this use or not this use, because ultimately we are not going to make technology decisions for other organizations. But we want you to feel like whatever decision you made, you made it by going through the right steps, asking the right questions, so that you can also trust your own decision, whatever decision you come to, right? And that you have some templates to fill in that were all created by humans, designed by humans, published by humans, to help you in that work.
I think especially, you know, the how to talk to your board and the key considerations documents really just ask a lot of questions and say, you know, how different is it if you're, say, an animal foster organization and you're thinking, OK, is AI appropriate for us to use, versus that youth social service organization? Very different considerations, right? And just helping people talk that through, and see that the considerations are different for different organizations, I think is really valuable. As, again, you consider facilitating a conversation with your board: they're also coming from very different sectors, maybe job types, backgrounds, experiences with AI. And so, just like with your staff, there needs to be some level setting in how you talk about AI, because not everyone knows what an AI model is. Not everyone knows what a large language model is. These are words that have to be explained and kind of put out of the way, and then to say, hey, it's not all one answer, not everybody needs to use every tool, and how do you talk about that with your teams? Going back to the Facebook analogy, you want to avoid the board member who comes to you and says, you know, artificial intelligence, we can be saving money, we can be doing so much more work. We don't even need a website, we have a Facebook page. We're not even sure we need all the staff that we have, because we're going to have so much efficiency. So, you know... OK. OK, board member. All right. Yeah. So we've been here before. I mean, I'm just going to go out on a limb and say it's probably the same board member who at every board meeting says, does anybody know MacKenzie Scott? How do we get one of those checks? Right. Why don't we get the MacKenzie check? Yeah. Right. Right. All right. What else?
Well, I was going to also offer some of the questions that we've been getting, as we've been engaged with a number of different organizations through some of our cohort programs and trainings for over a year now. And so maybe last year we were talking to them about, OK, let's make sure you have a data policy. Just as an organization, do you have a data privacy policy? So that anything you then go build that's AI specific, whether that's building a policy, building practices, building a tool, you have policies as a foundation to build off of. They've done that work, and now they're looking at different products, trying to create these lists of, here are the approved tools for staff, here are the approved ways staff can use them. And just like we see with our CRMs and our email marketing systems, then they come back and they're like, well, we reviewed it, we did everything, and now it's different. Now it's a different version. Now they rolled out this other thing. Yes, that is the beauty and the pain of technology, right? It's always changing, and we don't necessarily get to authorize that change. It just happens. And so the rules change. Yeah. And so folks have been asking us, well, how do we write policies with that in mind? And I think, if you are thinking about creating that approved product list, and the tools that aren't approved or whatever, be really clear that these products have version numbers just like anything else. So instead of just writing Gemini or ChatGPT, be specific about when you reviewed this and maybe approved it for use. Which edition was it that you were looking at? Is this a paid tier? So staff could say, oh, mine doesn't say Pro, or whatever it might be, right? Oh, I must be in the free one. OK?
I need to get into our organization's account, or something. So, the more clarity you can provide folks, the better, because right now, of course, they could just do an internet search and be like, oh, there's that product name, I'm going to go start using it, it's on the approved list. You know, again, there may be new terms, new product names that we're not used to saying, and so folks aren't as accustomed to looking at, oh, this is a different version of ChatGPT than that one was. So just putting that out there for folks to keep in mind: these tools are really operating just like others that you are used to, and of course there's less documentation. The questions we're getting from folks are like the point I made at the beginning: we can't see anywhere in the documentation an explanation of why this is happening, right? How could they document it, when the answer is, we also don't know why that happens, you know? And so when you are talking to staff, especially if you're saying, hey, these are approved tools and we have these licenses, or here's how to access them, training your staff on how to be the most human users of AI tools is, to connect to your human point, going to be really important. Because we don't want folks to feel that, because they don't necessarily understand the mechanics of how it works, they should just trust it without questioning the content. For a lot of organizations who have built internal tools, just as an example, it takes dozens of tries just to get the model right. So these other tools, of course, are not going to be perfect. Perfect isn't real, and perfect is absolutely not real with technology. So train staff on: how do I have some skepticism? How do I question what I'm seeing? How do I say, even if it was internally built, this data doesn't look right?
That doesn't match my experience of running this program. So that we don't let it slip into, oh gosh, it was working that way for a long time. That's also, I think, a space where we as humans can be our most human, have some value add as humans. But again, staff need to be trained that they are meant to question these tools. Because, you know, I don't know a lot of organizations that were like, question the database. No, they were like, rely on the database, put everything in the database, right? And now we need to say, no, question that report. Does it match your experience? You know, that was a long ramble, but... Oh, absolutely valuable. The human contribution. And of course, my concerns are even at the outset, you know, the early stage: the ceding, or surrendering, of the creative process. And now let's chat a little about the conversations. Yeah. I listened to the example that you mentioned earlier that George posted. It was for podcasting, a podcast conversation around this, and he gave them some Whole Whale content, and the two were going back and forth and having a conversation. Yeah. Yeah, I listened to it, and one thing I was curious if you caught, as the pod father yourself. You know, I've had opportunities to see a number of different generative AI tools, and things closer to the front edge of what things can do, that are specifically, like, taking just a few seconds of you and then creating you. So hearing, these could be any voices, these could be any people, is like, yeah, OK, this is what AI can do. It's spooky.
But when you listen to it, you can hear either you have a very bad producer and editor, you know, or this is AI, because there are certain phrases that got reused multiple times. Not just the words; literally the audio clip of the whole sentence, intonation and all, was reused multiple times. One of them, I think, was along the lines of, that's a really interesting point. Yeah. Yeah. And there was one that was describing the product, so it must have come from the page, or whatever source content was provided. But, you know, I think that some of that is there, and we as individuals, we as a society, will decide if we give it value or not. If it's worth it to people to make podcasts through AI, because we give it attention, or we don't. I just think naturally that will be there. Can I just go on record at this stage and say that that idea disgusts me? Oh, totally. And I realize that's what George's post was about. That it's now well within possible to create an hour-long podcast of an artificial conversation based on an essay that somebody wrote some time ago. Oh, totally. Totally. But I'm saying, yeah, I agree with you, but I'm saying that toothpaste doesn't go back in the tube. We can't turn it off. Generative AI tools can already make that. So we as individual consumers of content, and as a society, need to either say we're going to allow that and value it, or we're not, right? And not provide incentive for organizations or companies to make that and distribute it. But I also think that the place in that kind of video, audio, multimedia content that AI tools have the capacity, and will continue having more capacity, to build is much more important than what you
and I talked about a number of months ago: mis- and disinformation. It's one thing to say, made-up voices making some podcast about content, that's garbage, right? But we can't just throw away the idea that that's technically possible, because organizations need to know AI tools are already capable of creating a video of your CEO firing your staff. You need to be prepared to say, that was a spoof, and this is how we're going to deal with it, right? Because while maybe further separated from our work, the idea that content could just be created that way... you and I can say we don't value that, whatever, but these tools are capable of spoofing us as people, as leaders, as organizations. You know, what would it look like if there was a video of your program director saying that everybody in the community gets a grant, and you're a foundation, right? These are real issues, and I don't want folks to confuse how easy it may feel for us to have the opinion that some of this AI-generated content isn't of value with the idea that there isn't something there to come up with a strategy and plan for. Because, you know, we can say that's garbage, but those same tools that made the garbage could make your spoof. You know, also labeling. Yeah, I don't trust every AI-generated podcast team, and I'm not going to call them hosts, because there is no host, to label the content. I don't trust that that's going to happen: that it was artificially generated, that it's not a real conversation. Yeah. Hello, for everyone that's listening: human Amy is here, talking with human Tony, who I can see on the screen with me. Boycott your local AI podcast. I don't know. There's not a solution. You're right, we can't go back. I'm just voicing that we can say that it's not something we value, and we can say this is why we don't value it, right?
The art of conversation: listening, assimilating, responding, listening again, assimilating and responding. That is an art, uniquely... well, maybe it's not uniquely human. I don't know if deer have conversations, and we know whales do, so I take that back. It's not uniquely human, but at our level, you know, we don't converse merely to survive, merely to warn each other of threats. I'm suspecting that in the animal, in the mammal kingdom... wait, are animals mammals? Are mammals animals? No, and I think it's a Venn diagram. Oh, so they're separate. OK. So there's... kingdom, phylum, class, order, family, genus, species. I got that out of high school biology. I can say it in my sleep: kingdom, phylum, class, order, family, genus, species. All right. In the animal kingdom, my suspicion is that more of the communication is about the basics: there's a good food source, there's a threat, teaching the young, don't do that, things like that. Survival, more base. I doubt, you know, it's about the aesthetic of the forest that the deer are in. But even if the birds are talking about the way the sunlight comes through the leaves, they are still alive. And I think what you're trying to draw a distinction between is the value, and even beauty, of us having a conversation, and the value of what comes of that conversation in our own minds and our own learning, and in this case it's recorded, so other folks could hear it and, I guess, listen to it or be impacted by it, versus it being a technical mechanism where we say, OK, here's a long paper, go make it sound like two people are discussing this, right? That doesn't fit the criteria of what we want or need to value in the world we want, right? Yes, the art of conversation: something you look forward to, not something that you do out of necessity, right?
And, you know, there's a place where, especially at NTEN, conversations around AI have come back to the opportunities that AI tools may present for different ways of learning, different ways of accessing information. But again, those aren't necessarily, come up with two podcast host voices and have them have a conversation about this research report. A lot of those tools could be made better but already exist, you know: different forms of screen readers, apps that can help someone navigate the internet, or summarize documents to help them because they don't want to read a 50-page document, or they can't read it visually on the screen. So I think there's space there. But again, it's because you're trying to preserve what is most human, and that is that user who maybe needs accommodations of something that technology can provide. It's not, OK, let's use technology to co-create something separately over here and just hope people consume it, right? And to be clear, George didn't post that thinking, oh great, now everyone will want to consume this. No, it was a demonstration. But I'm just using that as a place to say, yes, there are even conversations to say, great, what accessibility could this create for our users, right? But what's most human is those users and their actual needs, and not, look what AI could do, let's just make different types of content, right?
The last one I want to raise is the one that caused me to use the word dystopian as I was commenting on George's post, which was the AI self-reflection: having AI justify itself to the users it is trying to attract, and then relying on that as an insightful analysis, as a thoughtful reflection, as contemplation of its own work. Those, I think, are uniquely human actions: introspection, contemplation, pondering. How did I do? How did I perform? How can I do better? These might be uniquely human. I would argue there are a number of humans I can see that don't... Well, but I didn't say... that's a different population. Now you're taking the whole human population; I'm talking about the contemplative ones. Yes. And there are humans who are not at all introspective, not questioning whether they could have done better and learning from their contemplations. But I think those are all uniquely human activities, and we're now asking AI to purportedly duplicate those processes and analyze and contemplate its own work. Yeah. And as you said earlier, I certainly don't trust it to be genuine and truthful, if AI is capable of truth (we'll put that existential question aside), in its analysis of its own work. Because, as you pointed out, the tools are built to be used by humans, and the tools are not going to condemn, or even just criticize, their own work. Yeah, and I think part of the challenge... I'm sorry, but there are humans who are deceiving themselves into thinking that the analysis and contemplation is accurate and genuine. Yeah.
And I think part of the challenge I was going to name is that we as users, I'm not saying you and I individually, but we as the human users of these tools, are also setting ourselves up to be dishonest in our use, because we are bringing inappropriate or misaligned expectations to the product. We cannot expect a tool that's designed to appease us, and to lie rather than give no answer, to thoughtfully and honestly reflect, or to say, no, there is no answer, right? However much we want that, the tools are designed to appeal to us. Right. Right. And when we are talking about that, to be clear, we're really talking about generative AI tools: tools that are designed to generate some new content, new sentences, new answers, whatever we're asking of it, back to us. AI itself is just such a massively blanket term that I don't want folks to think nothing that could be considered an AI tool could be trusted to generate an answer, because we're specifically talking about generative AI there. But say you had a machine learning model that was looking at 30 years of your program participant data. Well, that's probably a tool that isn't set up to generate content. It's not coming up with new participant data. It's looking at the patterns, it's flagging when a pattern meets the criteria you've presented to it, it's maybe matching that data to something else, but again, you've said, here are the things you could match it to, et cetera. So this is not to say Tony and Amy say never trust technology. They say bring expectations that are aligned to the tool, different for every tool that you're coming to. That hypothetical tool that you just described is being asked to evaluate your data, not its own data. It's evaluating your performance, your data set.
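A minimal editorial sketch of that kind of non-generative, criteria-based flagging over participant data. Everything in it, the field names, the criteria, the sample records, is hypothetical; the organization defines the rules, and the tool only checks records against them:

```python
# Illustrative sketch: a rule-based flagger over historical participant
# records. It generates nothing new; it only checks each record against
# criteria the organization itself has defined. All fields are hypothetical.

CRITERIA = {
    "at_risk": lambda p: p["sessions_attended"] < 3 and p["missed_checkins"] >= 2,
    "good_fit_for_mentoring": lambda p: p["sessions_attended"] >= 8,
}

def flag_participants(participants):
    """Return {flag_name: [ids of participants matching that flag]}."""
    flags = {name: [] for name in CRITERIA}
    for p in participants:
        for name, rule in CRITERIA.items():
            if rule(p):
                flags[name].append(p["id"])
    return flags

history = [
    {"id": 1, "sessions_attended": 2, "missed_checkins": 3},
    {"id": 2, "sessions_attended": 9, "missed_checkins": 0},
    {"id": 3, "sessions_attended": 5, "missed_checkins": 1},
]
print(flag_participants(history))
```

And the staff practice discussed above still applies: a human reviews the flags and asks whether they match their own experience of running the program.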
It's not being asked to comment on and criticize, or compliment, its own data. That's the critical difference. Yeah. Nobody here is saying... well, I'm not saying that you're saying we are, but you're wise to remind listeners we're not condemning all uses of generative AI and large language models, just urging people to be thoughtful about them and understand what the costs are. And there are costs on the organizational level and costs on the individual human level. And you comment on the organizational level, because you think more at that level. But on the human level, another layer to my concern is that the cost is quite incremental. Mm. Our creativity, our art of conversation, our synapses firing. It's happening slowly. With each usage we become less thoughtful composers, less critical thinkers, and it's so incremental that the change isn't noticed until, in my critical mind, it's too late, and we look back and wonder, how come I can't write a letter to my dad anymore? Why am I having such a hard time writing a love letter to my wife, husband, partner? Yeah. I know someone who is a new grandmother, and she has a little kit where you write letters to your grandchild, and they open them when they're, whatever, 15 or 20 years old, like a time capsule. Why am I having trouble composing a short note to my grandchild? Right. Well, and I think, you know, just honestly, as a person in this conversation, not speaking from an organizational strategy perspective, I think as a person that is your friend, Tony, I would say, I don't personally have a problem. I can write a letter, and I think I'm a strong communicator. You know, it's hard to make me stop talking. So I could write a letter.
I know, for other folks, even without AI, they might say, well, what goes in the letter? What do I put in there? What are the main things I should cover, whatever? And I actually have less of a strong reaction to the idea that somebody would use generative AI to come up with, OK, what are the three things I should cover in my letter, and then I'll go write the sentences. And more that what I hear underneath what you're saying is actually the same important value I have, and wrote about in the book with Afua, et cetera, which is: in the world I want, in this beautiful, equitable world where everyone has their needs met in the ways that best meet their own needs, technology is there in service to our lives, and not that we are bending our lives in order to make technology work, right? And maybe in that beautiful, equitable world there are people who have a technology, is it an app, is it called AI anymore, whatever it is, that says, hey, Tony, don't forget, today is the day you write the letter to your dad. It's Friday, you always write it on a Friday, or whatever, right? And make sure that you do it, because, you know, it makes you feel happy to write that letter. Maybe that's true. Maybe in that world there are some people who have a tool that helps them remember to do that. But what's important to me, and what I think I hear is important to you, is that technology is there because we need it and want it, and that it is working in the ways we need and want it to work, and not that our lives are influenced and shaped in order to adjust to the technology. Yes, I am saying that. I am concerned that our changing is beyond our recognition. We don't see ourselves becoming less creative. And I'm not even only concerned about myself. I can write a letter, and I think at 90 I'll still be able to write a letter.
But there are folks who are infants now, and those yet to be born, for whom artificial intelligence is going to be so much more robust, so much more pervasive, in ways that we can't today imagine, I don't think. Yeah. And what are those humans going to look like? I don't know. Maybe they'll be better humans. Maybe they will. I'm open to that. But I like the kind of humans that we are, you know. So I'm open to the possibility that there'll be better humans, but what will their human interactions be? Will they have thoughtful conversations? Will they have human moments together that are not artificially outlined first, and maybe even worse, constructed for them? I don't know. But some of my concern is about those of us who are currently living, who have been born, across the generations. Less so for older folks, because their interactions with artificial intelligence are fewer. If you're no longer working, your interactions with artificial intelligence may be nonexistent. And I think it's natural that as you're older you're less likely to be engaging with the tools than if you're in your twenties, thirties, forties, or fifties. Well, my very human reflection on today's conversation is that it is usually the case that we start talking about any type of technology topic and you constantly interject that I need to be practical, I need to give recommendations, I need to explain how to do things. And I appreciate and welcome you joining me over here in theoretical land, about the impact of technology broadly, across our work, across our missions, across our communities, across our future. Welcome, welcome to my land, Tony.
I have appreciated this one-time opportunity to let go of the practical, tactical advice, and, you know, I hope listeners had some thoughts, had some reactions. Truly, email me any time. But I hope that, if nothing else, it was an opportunity for folks to witness, or kind of listen in on, and maybe you were talking to yourself in your own head, a conversation about what these technologies can be, and what we need to think about with them. Because in any technology conversation, I think it's most important to talk about people. That's the only reason we're using these tools, right? People made them, and people are trying to do good work with them. So talking about people is always most important, and I hope folks take that away from this whole long hour of AI. Thank you for a thoughtful human conversation. Yes. Amy Sample Ward, they're our technology contributor and the CEO of NTEN. And folks can email me, tony@tonymartignetti.com, with your human reactions to our human conversation. Thanks so much, Tony. It was so fun. My pleasure as well. Thank you. Next week, a tale from the archive. If you missed any part of this week's show, I beseech you, find it at tonymartignetti.com. We're sponsored by DonorBox. Outdated donation forms blocking your supporters' generosity? DonorBox: flexible and friendly fundraising forms for your nonprofit. donorbox.org. Our creative producer is Claire Meyerhoff. I'm your associate producer, Kate Martignetti. The show's social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.