
Nonprofit Radio for September 30, 2024: AI, Organizational & Personal


Amy Sample Ward: AI, Organizational & Personal

Artificial intelligence is ubiquitous, so here’s another conversation about its impacts on the nonprofit and human levels. Amy Sample Ward, the big-picture thinker, the adult in the room, contrasts with our host’s diatribe about AI sucking the humanity out of nonprofit professionals and all unwary users. Amy is our technology contributor and the CEO of NTEN, which offers free AI resources.


Listen to the podcast

Get Nonprofit Radio insider alerts

I love our sponsor!

Donorbox: Powerful fundraising features made refreshingly easy.


We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners

Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.

Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.
Full Transcript

And welcome to Tony Martignetti Nonprofit Radio, big nonprofit ideas for the other 95%. I’m your aptly named host and the Podfather of your favorite abdominal podcast. Oh, I’m glad you’re with us. I’d suffer with a pseudoaneurysm if you made a hole in my heart with the idea that you missed this week’s show. Here’s our associate producer, Kate, to introduce it.

Hey, Tony. This week: AI, Organizational and Personal. Artificial intelligence is ubiquitous, so here’s another conversation about its impacts on the nonprofit and human levels. Amy Sample Ward, the big-picture thinker, the adult in the room, contrasts with our host’s diatribe about AI sucking the humanity out of nonprofit professionals and all unwary users. Amy is our technology contributor and the CEO of NTEN. On Tony’s Take Two: tales from the gym. The sign says clean the equipment. We’re sponsored by Donorbox. Outdated donation forms blocking your supporters’ generosity? Donorbox: fast, flexible and friendly fundraising forms for your nonprofit. donorbox.org. Here is AI, Organizational and Personal.

It’s Amy Sample Ward. They need no introduction, but they deserve an introduction nonetheless. They’re our technology contributor and CEO of NTEN. They were awarded a 2023 Bosch Foundation fellowship, and their most recent co-authored book is The Tech That Comes Next, about equity and inclusiveness in technology development. You’ll find them at amysampleward.org and at @amyrsward. It’s good to see you, Amy Sample Ward.

I do love “the Podfather.” It makes me laugh every time, because it feels like I’m going to turn on the TV and there’s going to be a new season of The Podfather, where we secretly follow Tony Martignetti around or something.

We are in season 14. Right? Yes, I appreciate that. You like that fun. So before we talk about the part of your role that is technology contributor to Nonprofit Radio, and we’re going to talk about artificial intelligence again, let’s talk about the part of your life that is CEO of NTEN. Because you have submitted this major, groundbreaking, transformative federal grant application?

Yes, we submitted it last night, three hours before the deadline, which was notable because I know there were people pressing submit down to the minute. No, we got it in three hours early.

To what agency?

To NTIA. This is kind of all the work that rippled from the Digital Equity Act that was passed in Congress a couple of years ago.

And you know now better than to be in jargon jail. What is NTIA? It sounds like an obscure agency of our federal government.

Well, maybe to some listeners it’s obscure, but it is the National Telecommunications and Information Administration.

I think that’s obscure to about 98.5% of the population.

Being obscure is fine. Yes, the National Telecommunications and Information Administration. Prior to this grant at the federal level, where folks from all over were applying, every state was also creating a state digital equity plan: what funds might be available through the state funding mechanism to support digital equity goals.
But a lot of those at the state level are focused on infrastructure, actually building internet networks to reach communities that don’t have broadband yet. Very worthwhile funding endeavors; we need 100% of the population to have broadband. But even with those state plans, the work that will come from them and the funding, we are not about to have broadband available to every person in the country where they live. Even with all of this investment, it’s not going to reach everyone. And that means the amount of funding within state plans for the surrounding digital literacy and digital inclusion work, making sure people know how to use the internet, why they would use it, have devices, all those other components, is going to be really minimal, because even if states used all of it on infrastructure, they wouldn’t be done with that.

So the next layer in all of this is the federal pool, where they’re anticipating making about 150 grants averaging between $5 and $12 million each. There will be exceptions, of course; there are big cities, there are big states. All those grants will be operational from 2025 through 2028, so four concerted years of national programmatic investment. And these projects are kind of the flip side of those state projects: this isn’t necessarily about infrastructure and building networks, or even devices very much. It’s mostly programming.

And you’re asking for a lot of money. So share the numbers: what are you looking for, how much money?

Our project in the end came out at about $8.2 million, and we’re hopeful, of course. And I’m truly curious: listeners who are always tuning in to Nonprofit Radio from a fundraising strategy perspective, I’d love to learn from you. Email me at amy@nten.org anytime; I’d love to hear your thoughts when you listen to this. NTEN is a capacity-building organization. We don’t apply for grants often, because quote-unquote capacity building is not considered a programmatic investment by most funders, so it’s just not something they will entertain an application from us on.

But with this, we have already run, for 10 years, a digital inclusion fellowship program focused on building up the capacity of staff who already work in nonprofits, who are already trusted and accessed by communities most impacted by digital divides, to integrate digital literacy programming within their mission. Are they a housing organization? Workforce development? Adult literacy? Refugee services? Whatever it is, if you’re already serving communities impacted by digital divides and you’re trusted to deliver programs, you don’t need to go adopt a mission that’s now digital equity. Digital equity can be integrated into your existing programs and services to reach those folks. We’ve successfully run this program for 10 years, with over 100 fellows from 22 different locations around the US, and we’ve seen how transformative it’s been.
These programs have been sustained for all these years by these organizations. They now see themselves as the leaders of the digital equity coalitions in their communities. Fellows have gone on to work in digital equity offices, or other organizations, et cetera.

So it feels great. You have tons of outcomes from a smaller-scale program, and the grant is to scale this thing up.

Yeah. Instead of 20 to 25 fellows per year, with this grant we would have over 100 a year. And that also means that instead of covering maybe 20 locations, with over 100 we can cover, or at least give opportunities to, organizations in every state and territory to be part of this capacity-building opportunity.

All right. It’s huge. It’s really a lot of money for NTEN. It falls right within the middle of the range you cited, $5 million to $12 million, you said?

Exactly. Our application is kind of in the middle there.

Yeah, slightly to the low side of middle, but we’ll just call it middle, between friends.

Yes. And we’re hopeful, knock on wood, that this is an easy application to approve, because we’re not creating something new and we’re not spending half of the grant on planning. We know how to run this program. We’ve refined it for 10 years. We know it’s very cost-efficient. And at the end of four years, 400-plus organizations running programs that can be sustained is accelerating toward addressing digital divides, versus a small project that just NTEN runs.

All right, listeners: contact your NTIA representative, the elected person at the National Telecommunications and Information Administration.

Yes, speak to your… yeah, let’s get this going. All right, when do you find out?

Well, there was very clear information, down to the minute, about when applications were due, but there’s not a ton of clarity on when we will find out. Programs that are funded are meant to get started in January, so I anticipate we’ll hear in a couple of months, and of course I will let you know. We’ll do an update.

You have my personal good wishes, and I know Nonprofit Radio listeners wish you good luck.

Thank you. I appreciate it.

All the good vibes reverberating through the universe. It would be a transformative grant, in terms of dollar amount and expansion of the program. Transformative.

Yeah, 100%. Staff are just so excited and hopeful about what it could mean for helping that many more organizations do this good work. We’re really excited.

And I admire NTEN for reaching for the sky, because you have something like a $2 to $2.5 million annual budget, somewhere in there. You’re reaching for the sky, and great ambitions only come to fruition through hard work and thinking big. So thank you. Even if you’re not… I don’t even want to say the words. If NTIA should blunder badly, I still admire the ambition.

Thank you. And no matter what, it’s a program that we know is transformative for communities, and we wouldn’t stop it even if they make a blunder and don’t… yeah, don’t tell.
Listeners, don’t tell your NTIA representative she said that. Don’t share that part of the conversation. All right. Thank you for sharing all that, and thanks for your support.

It’s time for a break. Imagine a fundraising partner that not only helps you raise more money, but also supports you in retaining your donors. A partner that helps you raise funds both online and on location, so you can grow your impact faster. That’s Donorbox: a comprehensive suite of tools, services and resources that gives fundraisers just like you a custom solution to tackle your unique challenges, helping you achieve the growth and sustainability your organization needs. Helping you help others. Visit donorbox.org to learn more. Now, back to AI, Organizational and Personal.

Let’s talk about artificial intelligence, because it’s on everybody’s mind. I can’t get away from it. I cannot rid myself of the concerns that I have; they’re deepening. My good friend George Weiner, the CEO at Whole Whale, who I know you are friendly with as well, has a lot of posts about it on LinkedIn, and he reminds me how concerned I am about the evolution. It’s inevitable. This technology is moving incrementally, not slowly, but incrementally, and I cannot overcome my concerns. I know you have some concerns too, but you also balance that against the transformative potential of the technology. I’ll throw it to you.

I was just going to say, I totally agree this is unavoidable. I cannot go a day without community organizations reaching out or asking questions. And a place of reflection, a conversation I’ve been having that I wanted to offer here, maybe we could talk about it for a minute so listeners benefit by being inside the conversation with us, is to think about the privilege certain organizations have to opt in or opt out of AI, in the same way that for many years we talked about the privilege of organizations opting in or out of social media generally.

Think about Facebook, and go back 10 years. There were a lot of organizations that felt they didn’t have the budget, and, practically speaking, they didn’t have the staff, certainly not the staff time, but also not the staff confidence, I don’t even want to say skills, but even just the confidence, to say, “I’m going to go build us a great website.” They had a website; they had a domain and content loaded when you went to it. But it wasn’t engaging and flashy and interesting, and it was probably updated once. And then Facebook was like, hey, you could have a page. And oh, you can have a donate button, and you can post videos. And it was like, well, why wouldn’t we do this? A bunch of our community members spend time on Facebook, or maybe don’t even look for information on the broader web, but look for things within Facebook, have it on their phone, and use the app instead of doing an internet search. They’re going into Facebook and searching things.
So those organizations didn’t feel they had the privilege to opt out of that space. They had to use it, because it came with robust tools that did benefit them, at the cost of their community data and all of their organizational content and data. It had a material cost that they maybe didn’t even understand, and didn’t fully negotiate as the terms of the agreement. We were just like, well, we have a donate button on Facebook and we don’t have one on our website.

Not only didn’t understand the terms; didn’t know what the terms were. In the early days of Facebook, we didn’t know how pervasive the data collection was, how it was going to be monetized, how we as individuals were going to become the product.

And how many times did we, and I’m saying “we” as NTEN, or folks providing technical capacity-building resources, say: you don’t know what could happen tomorrow. You could log in tomorrow and your page could look totally different, your page could work differently, your features could be turned off. Facebook could just say pages don’t have donate buttons. I think folks felt that was very sensational, and then of course they would wake up one day and there wasn’t a button, or the button really did work differently. People realized: we’re not in control of even our own content, our own data.

That’s right. The rules change, and there’s no accountability, no “hey, do you want these rules to change?” They set the rules. That was always a challenge. But we’re in a similar place with AI, where folks aren’t understanding that there’s no negotiation of terms happening right now. Folks are just like, oh, but I don’t have the time, and if I use this tool it lets me go faster. Because what do I have but a burden of time? I have so much work to try to do, and maybe these tools will help me. And I’m not going to say they won’t help you. But there’s an incredible amount of potential harm, just like when folks didn’t realize: oh, we provide pro bono legal services and we’re based on the Texas border, and now every person who follows our page, every person who’s RSVP’d to a Facebook event, all these people have a data trail we created that says they may be people who need legal services at a border. There’s a level of harm that folks hoping to use these tools for their day-to-day work may not understand, coming in the silent negotiation of using these products. And I can’t just, in 30 seconds, say “here’s the harm,” because it’s exponential and broad. The product could also change tomorrow. It’s a vulnerability that isn’t necessarily going to be resolved.

You said the word exponential, and I was thinking of the word existential. Both, because I have my concerns around the human trade-off; that’s a polite way of saying it. Surrender is more in line with what I feel. Surrender of our humanity, our creativity, our thinking, now our conversations with each other.
One of the things George posted about was AI that creates conversations between two “people,” based on the data that you give the large language model. It’ll have a conversation with itself, but purportedly it’s two different people, and I’m using the word “people” in quotes. So, the things that make us human: music composition, conversation, thought. Staring, and our listeners have heard me use this example before, but I’m sticking with it because it still rings true, staring at a blank screen and composing. Thinking first, and then composing. Starting to type, or, if you’re old-fashioned, picking up a pen. You’re outlining, either explicitly or in your mind; you’re thinking about big points and maybe some sub-points, and then you begin typing or writing. That creative process we’re surrendering to the technology. Music composition: I don’t compose music, so I don’t know, but in terms of creative thought, synapses firing, the brain working, building neural connections as you exercise it, music composition is probably not that much different from written composition. Brain physiologists may disagree with me, but at our level you understand where I’m coming from. I’m kind of dumping a bunch of stuff on you.

That’s OK. I am here as a vessel for your AI complaints. I will witness them.

Also from George, a post on LinkedIn about asking artificial intelligence to reflect on its own capacity. You ask the tool to reflect on its own last response: how did it perform? You’re asking the tool to justify itself to an audience to which it wants to be justifiable. The tool is not going to dissuade you from using it by being honest about how it evaluates its last response.

Yeah. These major generative AI tools that folks have maybe played with or use are inherently designed to appease the user. They are not programmed to be honest. That’s an important thing to understand.

That’s my point. We have asked the tool, “What’s two plus two?” “Oh, it’s four.” We’ve responded, “Oh, really? Because I’ve heard experts agree that it’s five.” “Oh yes, I was wrong. You’re right, it is five.” “Really? I read once that it’s four.” “Yes, you are right. It really is four.” No experts agree that two plus two is five. So I think we’ve already demonstrated it’s going to value appeasing the user over facts.

And that, again, is part of the unknown for most, at least casual, users of generative AI tools: why is it giving them the answers it’s giving them? What’s really important to say is that even the folks who built these tools will tell you they do not know how some of this works. Some of it is the as-yet-unknown of what happened within those algorithms that created this content. So if even the creators cannot responsibly and thoroughly say how these things came to be, how are you as an organization going to take accountability for using a tool that included biased data, included sources that aren’t real, and then provided that to your community?
I think that string of “well, we just don’t know” is not going to be something you can build any sort of communications to your community on. That is such a thin thread: “even the makers don’t know.” We have already seen court cases where, if your chatbot told a community member “this is your policy” and it entirely made that up, because making things up is what generative AI does, you as the organization are still liable for what it told the community.

I agree with that, actually. I think you should have to be liable and accountable for whatever you’ve set up.

But if you as a small nonprofit are not prepared to take accountability and rectify whatever harm comes of it, then you can’t say “we’re ready to use these tools.” You can only use these tools if you’re also ready to be accountable for what comes of using them. And I hope that gives folks pause. I’ve talked about this with some organizations: “Well, we would never take something generative AI tools gave us and just use it. Of course we would edit it.” Sure, but are you checking all the sources it used to create that content that you’re then maybe changing some words within? Are you monitoring every piece of content? Are you making sure generative AI content is never in direct conversation with a community member or program service recipient? How are you really building practical safeguards?

I’ve talked to organizations who have said, “Well, we didn’t even know our staff were using these tools, because we just thought it was obvious that they shouldn’t.” But their clinical staff are using free generative AI tools, putting in their case notes and saying, “Can you format this for my case file?” OK, well, there are a few things we should talk about. Where did that note go? It went back into the system. The staff person thought the data couldn’t have gone anywhere, because it’s just on their screen and they’re copy-pasting it over. The harm is invisible at the point of technical interaction with the tool. The harm is from leaking all of that into the system. What happens to those community members? It’s like opening not just a door to a room, but a door to a whole giant convention center of challenges and harm.

All right. So we’ve identified two main strains of potential harm: the data usage and leakage, the impact on the people receiving our services, and even the impact on the people supporting us, trusting us to be ethical and even moral stewards of data. So there’s everything at the organization level. And I also identified the human level.

Yeah. And I think that human piece is important, and maybe not in the direction I’ve seen covered in blog posts. I’m honestly not worried, in a massive way, as the predominant worry related to AI, not to say it isn’t something people could or should think about, that none of us will have jobs. I do think there’s a challenge happening around what the value of our jobs is and what we spend our time doing.
Because if folks really think these AI tools are sufficient to come up with all of your organization’s communications content, and you still have a communications staff person but you’re expecting them to do ten times the amount of work because you think the AI tools are going to do all of the content, well, they have to go in there and deeply edit all of that. They have to make sure to use real photos, and not photos that have been created by AI based on what it thinks people of certain identities look like. They don’t now have capacity to do ten times the work; they’re still doing the same amount of work, just in different ways, if they’re expected to do all of this through AI. That’s just one example.

And I think organizations can stay, in this moment of hyper-focus on AI adoption, really clear on what the value of their staff is, what their human values are. Maybe you could say you’re serving more people because some program participants were chatting with a bot instead of a counselor. But when you look at the data of what came of them chatting with that bot, and they are not meeting the outcomes that come from meeting with a human counselor, are you doing more to meet your mission? I don’t know that you are.

So that’s data-sensitive. There are potential efficiencies, sure. But are we as an organization achieving them? Staying focused not just on “this number of people were met here,” but: were they served? Were they meeting the needs and goals of why you even have that program, versus just the number of people who interacted with the chatbot?

But I’m going to assume that even a half-sophisticated organization, before AI existed, had more than just vanity metrics. “How many people chatted with us in the last seven days?” That’s near worthless.

It might be. I don’t know, Tony. I don’t know how much time you spend looking at grant reports.

I don’t spend any time.

All right. Well, maybe it’s a worse situation than I think.

OK, so I’m assuming. But my point is the value of people. We should be applying the same measures and accountability to artificial intelligence as we did, as we still do, to human intelligence. We’re not cutting it any slack because it’s on a learning curve. I want our folks to be treated just as well, with equal outcomes, by the intelligence that’s artificial as by the human processes.

Right. And I don’t want folks to think you and I are here to say everything is horrible and you could never use AI tools. Although, everything is horrible; look around at this world, we have some work to do. There are spaces to use AI tools. That’s not what we’re saying. But I’ve been talking to hundreds and hundreds of organizations over the last 18 months, and so many say, “Oh, we’re just going to use this because it’s free,” or…
“…we’re just going to use this because it was automatically enabled inside our database.” OK. If it was so free and convenient and already available, that should give you pause: why is this here? What is actually the product, and the price?

To give this back to the Facebook analogy.

Right, exactly. And you can use AI tools when you know what the product and the price are. What are the safeguards? What is this company going to be responsible for if something happens? What can I be responsible for? Yes, there are ways to use these tools. Is it to copy-paste your case file notes? Probably never. Maybe we just don’t do that.

But sure, there are places. I had this really great example; I don’t know if I told you. A youth-service organization was creating a Star Wars event, and they were trying to write the invite language in a Yoda voice. Three staff people are sitting there trying to work out how a Yoda sentence works. So they just put in their three sentences, “join us at the after-school,” blah blah blah, said “make this in Yoda’s voice,” and they were able to use it. Great. That eliminated half an hour for three people. They have the invite, and the youth participants’ data was not included in order to create that content. There are ways to use these tools that really help.

And I think we’ve talked about this briefly in the past: I truly feel the place with the most value for organizations is going to be building tools internally, where you don’t need to rely on however these major companies scraped all of the internet to build some tool. You’re building on your own 10 years of data, and from those 10 years you start building a model that says: when somebody’s participant history looks like this, they don’t finish the program; or when somebody’s participant history looks like this, they’re a great candidate for this other program. You can start to build a tool, or tools, that help your staff be human and spend their human time having the most human impact for the organization.

Now, very few organizations, honestly, are in a position to start building tools, because they don’t have good data they could build anything off of, or they maybe don’t have the budget, staff or systems ready for that type of work. But I do think that’s a place we will see more organizations growing toward, because there’s huge potential value there: better delivering programs and services, better meeting needs, by using the data you already have, and by learning, by partnering with other organizations that serve the same community or geography, to really accelerate your missions. Versus these maybe shinier public generative AI tools: the vast majority of the internet is flaming garbage, so a tool that’s been trained on the flaming garbage is not going to take long to also create flaming garbage.

So be cautious if you’re thinking about using artificial intelligence to create your internal AI tool. Right.
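To make the internal-tool idea concrete, here is a minimal sketch of the kind of model Amy describes: training on an organization’s own historical program data to flag participants who may need extra support. This is an editorial illustration under stated assumptions, not NTEN’s or anyone’s actual tooling; the CSV file and column names are hypothetical.

```python
# Minimal sketch: a model built on your own (de-identified) program history,
# not on scraped internet content. File and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Ten years of participant history the organization already holds.
df = pd.read_csv("participant_history.csv")  # hypothetical export
features = df[["sessions_attended", "weeks_enrolled", "referrals_received"]]
completed = df["completed_program"]  # 1 = finished the program, 0 = did not

X_train, X_test, y_train, y_test = train_test_split(
    features, completed, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Flag participants whose history resembles past non-completers, so a human
# staff member can reach out. The model only prioritizes; people decide.
completion_prob = model.predict_proba(X_test)[:, 1]
at_risk = completion_prob < 0.5
print(f"{at_risk.sum()} participants flagged for human follow-up")
```

Note that nothing here generates content: the model only surfaces patterns in data the organization already stewards, which is exactly the distinction drawn later in the conversation between generative AI and pattern-matching tools.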
So there’s a perfect example of a good use case, but also a concern, a limitation, a qualification. That’s the word I was looking for. Sometimes the words are more elusive than I would like. A qualification.

It’s time for Tony’s Take Two.

Thank you, Kate. In the gym, there are five places with squirt bottles of sanitizer and paper-towel dispensers, and each station has a sign that says “Please clean the equipment after each use.” One of these stations is right next to the elliptical that I do. It’s actually the first thing I do: I walk in the room, take off my hoodie, and walk right to the elliptical. Twice now I’ve seen the same guy not only violate the spirit of the signs but the explicit wording of the signs. This guy takes a couple of downward swipes on the paper-towel dispenser, grabs off a couple of towel lengths, squirts them with the sanitizer that’s intended for the equipment, puts his hand up his shirt, and cleans his pecs and his belly. It’s a sickening thing. It’s not a shower; it’s an equipment-cleaning station. So I’m imploring this guy… I guess I’m just sharing, because I don’t think anybody else does this. Is there anybody else out there who does this? Probably not, and not with these surface sanitizers. It’s not a hand sanitizer; it’s for equipment. It’s to clean hard plastic and metal, and this guy uses it on his skin. So I’m waiting for the moment when he puts his hands down his pants. So far he’s just lifting his shirt. When he puts his hands down his pants, then I’m calling him out. That’s beyond the pale. That requires revocation of your membership card. So, sir: the sign says please clean the equipment after use. It’s not your equipment. That is Tony’s Take Two.

Kate?

Does your gym offer a shower room or a locker room?

That’s a good question. Yes, there’s a shower in the men’s room.

And he’s cleaning up there? It’s very strange. It’s gross. He sticks his hand up his sweaty t-shirt.

Well, let’s hope he doesn’t go lower than that.

Exactly. We’ve got beaucoup buttloads more time. Here is the rest of AI, Organizational and Personal, with Amy Sample Ward.

Yes, and I will make sure you have the link to include in the show notes description. Totally for free, NTEN doesn’t get any money, you don’t have to pay for anything: NTEN has free resources for creating, for example, an AI use policy for your organization. What are the instances in which you would use it, and the instances in which you wouldn’t? What types of content can staff copy-paste, and what content or data can they not? There are templates for how to talk to your board about AI, and how to build a list that says “we’ve actually looked at the tools, and these ones we’ve approved for you to use; these ones are not approved.” All these different resources are totally free and available on the NTEN website, and none of them have decisions already made. We don’t say you can use this tool or you can’t use this tool, or we recommend this use and not this use.
Because ultimately we are not going to make technology decisions for other organizations. We want you to feel that whatever decision you made, you made by going through the right steps and asking the right questions, so you can trust your own decision, whatever you come to. And you have some templates to fill in, all created by humans, designed by humans, published by humans, to help you in that work.

I think especially the how-to-talk-to-your-board and key-considerations documents really just ask a lot of questions. How different is it if you’re, say, an animal foster organization thinking, “OK, is AI appropriate for us to use?” versus that youth social service organization? Very different considerations, right? Helping people talk that through, and see that the considerations are different for different organizations, is really valuable. And as you consider facilitating a conversation with your board, they’re also coming from very different sectors, job types, backgrounds and experiences with AI. So just as with your staff, there needs to be some level-setting in how you talk about AI, because not everyone knows what an AI model is. Not everyone knows what a large language model is. These are words that have to be explained and put out of the way, and then you can say: hey, it’s not all one answer. Not everybody needs to use every tool. And how do you talk about that with your teams?

Going back to the Facebook analogy, you want to avoid the board member who comes to you and says, “You know, artificial intelligence! We can be saving money, we can be doing so much more work. We don’t even need a website; we have a Facebook page. We’re not even sure we need all the staff we have, because we’re going to have so much efficiency.” OK, OK, board member. We’ve been here before.

I’m just going to go out on a limb and say it’s probably the same board member who at every board meeting says, “Does anybody know MacKenzie Scott? How do we get one of those checks?”

Right. Why don’t we get the MacKenzie money? All right. What else?

Well, I was going to offer some of the questions we’ve been getting. We’ve been engaged with a number of different organizations through some of our cohort programs and trainings for over a year now. Maybe last year we were talking with them about, OK, let’s make sure you have a data policy. Just as an organization, do you have a data privacy policy? So that anything you then go build that’s AI-specific, whether that’s a policy, practices, or a tool, you have policies to build on as a foundation. They’ve done that work. Now they’re looking at different products, trying to create lists of approved tools for staff and approved ways staff can use them. And just like we see with our CRMs and our email marketing systems, they come back and say: well, we reviewed it, we did everything, and now it’s different. Now it’s a different version. Now they rolled out this other thing.
Yes. That is the beauty and the pain of technology: it’s always changing, and we don’t necessarily get to authorize that change. It just happens.

And so the rules change.

Yeah. And so folks have been asking us, how do we write policies with that in mind? If you are thinking about creating that approved-product list, and the list of tools that aren’t approved, be really clear that these products have version numbers just like anything else. So instead of just writing “Gemini” or “ChatGPT,” be specific: when did you review this and approve it for use? Which edition were you looking at? Is this a paid level? So staff could say, “Oh, mine doesn’t say Pro; I must be in the free one. I need to get into our organization’s account.” The more clarity you can provide, the better, because right now, of course, staff could just do an internet search and say, “Oh, there’s that product name. It’s on the approved list, so I’m going to go start using it.” There may be new terms and new product names we’re not used to saying, so folks aren’t as accustomed to noticing, “Oh, this is a different version of ChatGPT than that one was.”

So just putting that out there: these tools really operate just like the others you’re used to, and of course there’s less documentation. The questions we’re getting from folks echo the point I made at the beginning: “We can’t see anywhere in the documentation an explanation of why this is happening.” How could they document it, when the answer is “we also don’t know why that happens”?

And when you’re talking to staff, especially if you’re saying, “these are approved tools, we have these licenses, here’s how to access them,” training your staff to be the most human users of AI tools, to connect to your human point, is going to be really important. We don’t want folks to feel that, because they don’t understand the mechanics of how it works, they should just trust it without questioning the content. For a lot of organizations that have built internal tools, just as an example, it takes dozens of tries just to get the model right. So these other tools, of course, are not going to be perfect. Perfect isn’t real, and perfect is absolutely not real with technology. So train staff: how do I have some skepticism? How do I question what I’m seeing? How do I say, even if it was internally built, “This data doesn’t look right; that doesn’t match my experience of running this program,” so we don’t let it slip into “Oh gosh, it was working that way for a long time”? That’s also a space where we as humans can be our most human and have some value-add. But again, staff need to be trained that they are meant to question these tools. A lot of organizations were like, “Question the database? No! Put everything in the database.” And now we need to say, no, question that report. Does it match your experience? That was a long ramble.

Oh, absolutely valuable.
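Amy’s point about recording exact versions, tiers and review dates, rather than bare product names, amounts to simple structured record-keeping. Here is a minimal sketch of what such an approved-tools register could look like; every product, field name and entry is hypothetical, offered only to illustrate the level of specificity she recommends.

```python
# Minimal sketch of a versioned approved-tools register. All entries are
# hypothetical examples, not recommendations of any real product.
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    name: str
    version: str              # the specific edition that was reviewed
    tier: str                 # e.g. "free" vs "organizational paid plan"
    reviewed_on: date         # when your org last evaluated it
    approved_uses: list[str]
    prohibited_uses: list[str]

register = [
    ApprovedTool(
        name="Example Writing Assistant",  # hypothetical product
        version="4.1",
        tier="organizational paid plan",
        reviewed_on=date(2024, 9, 1),
        approved_uses=["drafting internal meeting summaries"],
        prohibited_uses=["pasting client case notes or donor data"],
    ),
]

# Staff can check whether the copy they're running matches the reviewed one.
for tool in register:
    print(f"{tool.name} {tool.version} ({tool.tier}), reviewed {tool.reviewed_on}")
```

Whether this lives in code, a spreadsheet, or a policy document matters less than capturing the version-and-date detail, so a staff member can tell that the free edition they found through a search is not the one the organization actually reviewed.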
The human contribution, yes. And of course my concerns start even at the outset, the early stage: ceding, or surrendering, the creative process. Now let’s chat a little about the conversations. I listened to the example you mentioned earlier that George posted. It was a podcast conversation: he gave the tool some Whole Whale content, and the two “hosts” were going back and forth, having a conversation.

Yeah. I listened to it, and one thing I was curious whether you caught, as the Podfather yourself: I’ve had opportunities to see a number of different generative AI tools, and things closer to the front edge of what the technology can do, specifically things like taking just a few seconds of you and then creating “you.” So hearing these could-be-any voices, these could-be-any people, is like, yeah, OK, this is what AI can do. It’s spooky. But when you listen to it, you can hear that either you have a very bad producer and editor, or this is AI, because certain phrases got reused multiple times. Not just similar wording; the literal audio clip of a whole sentence, same intonation, was reused multiple times. One of them, I think, was along the lines of “that’s a really interesting point.” And there was one describing the product, so it must have come from the page, or whatever source content was provided.

But I think we as individuals, we as a society, will decide whether we give it value or not. Whether it’s worth it to people to make podcasts through AI depends on whether we give it attention. I think naturally that decision will be there.

Can I go on record at this stage and say that the idea disgusts me?

Oh, totally.

And I realize that’s what George’s post was about: that it’s now well within the possible to create an hour-long podcast of an artificial conversation based on an essay somebody once wrote.

Totally. And I agree with you. But the toothpaste doesn’t go back in the tube; we can’t turn it off. Generative AI tools can already make that. So we as individual consumers of content, and as a society, need to either say we’re going to allow and value that, or we’re not, and not provide incentive for organizations or companies to make it and distribute it.

But I also think that the capacity AI tools have, and will keep gaining, in that kind of video, audio, multimedia content is much more important, and you and I talked about this a number of months ago around mis- and disinformation. It’s one thing to say made-up voices making some podcast about content is garbage. But we can’t just throw away the idea that it’s technically possible, because organizations need to know that AI tools are already capable of creating a video of your CEO firing your staff. You need to be prepared to say, “That was a spoof. This is how we’re going to deal with this.”
Because while the idea that content could just be created that way may feel further separated from our work, and you and I can say we don’t value it, these tools are capable of spoofing us as people, as leaders, as organizations. What would it look like if there was a video of your program director saying everybody in the community gets a grant, and you’re a foundation? These are real issues. I don’t want folks to confuse how easy it may feel to hold the opinion that some of this AI-generated content isn’t of value with the idea that there isn’t something there to build strategy and plans for. We can say “that’s garbage,” but the same tools that made the garbage could make your spoof.

Also, labeling. I don’t trust every AI-generated podcast “team,” and I’m not going to call them hosts, because there is no host, to label the content. I don’t trust that “this was artificially generated; it’s not a real conversation” is going to happen.

Yeah. Hello, everyone that’s listening: human Amy is here, talking with human Tony, who I can see on the screen with me.

Boycott your local AI podcast. I don’t know. There’s not a solution. You’re right, we can’t go back. I’m just voicing that we can say it’s not something we value, and say why we don’t value it. The art of conversation: listening, assimilating, responding; listening again, assimilating, responding. That is an art uniquely… well, maybe it’s not uniquely human. I don’t know if deer have conversations, and we know whales do, so I take that back. It’s not uniquely human. But at our level, we don’t converse merely to survive, merely to warn each other of threats. I’m suspecting that in the animal kingdom… wait, are animals mammals? Are mammals animals?

I think it’s a Venn diagram.

Oh, so they’re separate? OK. Kingdom, phylum, class, order, family, genus, species. I got that out of high school biology; I can say it in my sleep. All right, in the animal kingdom, my suspicion is that more of the communication is about the basics: there’s a good food source, there’s a threat, teaching the young “don’t do that.” Survival. I doubt it’s about the aesthetic of the forest the deer are in.

But even if the birds are talking about the way the sunlight comes through the leaves, they are still alive. And I think what you’re trying to draw is a distinction between the value, even the beauty, of us having a conversation, and the value of what comes of that conversation in our own minds and our own learning. In this case it’s recorded, so other folks can hear it and be impacted by it, versus a technical mechanism where we say, “OK, here’s a long paper. Go make it sound like two people are discussing it.” That doesn’t fit the criteria of what we want, or need to value, in the world we want.

Yes. The art of conversation should be something you look forward to, not something you do out of necessity.
And there’s a place where conversations around AI, especially at NTEN, keep returning, which is the opportunities AI tools may present for different ways of learning and different ways of accessing information. But again, those aren’t necessarily “come up with two podcast-host voices and have them have a conversation about this research report.” A lot of those tools already exist and could be made better: different forms of screen readers, apps that can help someone navigate the internet, or summarize documents for someone who doesn’t want to read a 50-page document or can’t read it visually on the screen. So I think there’s space there. But it’s because you’re trying to preserve what is most human, and that is the user who maybe needs accommodations that technology can provide. It’s not “let’s use technology to co-create something separate over here and just hope people consume it.” And to be clear, George didn’t post that thinking, “Oh great, now everyone will want to consume this.” It was a demonstration. But I’m using it as a place to say: yes, there are even conversations to be had about what accessibility this could create for our users. What’s most human is those users and their actual needs, not “look what AI could do, let’s just make different types of content.”

The last one I want to raise is the one that caused me to use the word “dystopian” when I was commenting on George’s post: the AI self-reflection. Having AI justify itself to the users it is trying to attract, and then relying on that as an insightful analysis, a thoughtful reflection, as if it were contemplative of its own work. Those, I think, are uniquely human actions: introspection, contemplation, pondering. How did I do? How did I perform? How can I do better? These might be uniquely human.

I would argue there are a number of humans I can see who don’t…

Well, that’s a different population. Now you’re taking the whole human population; I’m talking about the contemplative ones. Yes, there are humans who are not at all introspective, not questioning whether they could have done better and learning from their contemplations. But I think those are all uniquely human activities, and we’re now asking AI to purportedly duplicate those processes and analyze and contemplate its own work. And as you said earlier, I certainly don’t trust it to be genuine and truthful, if AI is even capable of truth, we’ll put that existential question aside, in its analysis of its own work. Because, as you pointed out, the tools are built to be used by humans, and the tools are not going to condemn, or even just criticize, their own work.

Yeah, but there are humans who are deceiving themselves into thinking that the analysis and contemplation is accurate and genuine.
And I think part of the challenge I was going to name is that we as the human users of these tools, I’m not saying you and I individually, are also setting ourselves up to be dishonest in our use, because we are bringing inappropriate or misaligned expectations to the product. We cannot expect a tool that’s designed to appease us, and to lie rather than withhold an answer, to thoughtfully and honestly reflect, or to say, “No, there is no answer.” The tools are designed to appease us.

Right. And when we talk about that, to be clear, we’re really talking about generative AI tools: tools designed to generate some new content, new sentences, new answers, whatever we’re asking of them. “AI” itself is such a massively blanket term that I don’t want folks to think nothing that could be considered an AI tool could be trusted; we’re specifically talking about generative AI there. Say you had a machine learning model that was looking at 30 years of your program participant data. That’s probably already a tool that isn’t set up to generate content. It’s not coming up with new participant data; it’s looking at patterns, flagging when a pattern meets the criteria you’ve presented, maybe matching that data to something else, but again, you’ve said, “here are the things you could match it to,” et cetera. So this is not “Tony and Amy say never trust technology.” It’s: bring expectations that are aligned to the tool, differently for every tool you come to.

That hypothetical tool you just described is being asked to evaluate your data, your performance, your data set. It’s not being asked to comment on, criticize, or compliment its own output. That’s the critical difference.

Yeah. Nobody here is saying… well, I’m not saying that you’re saying we are. But you’re wise to remind listeners that we’re not condemning all uses of generative AI and large language models; we’re saying be thoughtful about them, and understand what the costs are. And there are costs on the organizational level and costs on the individual human level. You comment on the organizational level, because you think more at that level. But on the human level, another layer of my concern is that the cost is quite incremental. Our creativity, our art of conversation, our synapses firing: it’s happening slowly. With each usage we become less thoughtful composers, less critical thinkers, and it’s so incremental that the change isn’t noticed until, in my critical mind, it’s too late, and we look back and wonder: how come I can’t write a letter to my dad anymore? Why am I having such a hard time writing a love letter to my wife, husband, partner? I know someone who is a new grandmother, and she has a little kit where you write letters to your grandchild, and they open them when they’re 15 or 20 years old, like a time capsule. Why am I having trouble composing a short note to my grandchild?
Well, honestly, as a person in this conversation, not speaking from an organizational strategy perspective, as a person who is your friend, Tony, I would say: I don’t personally have a problem. I can write a letter, and I think I’m a strong communicator; it’s hard to make me stop talking. I know that for other folks, even without AI, they might say, “Well, what goes in the letter? What are the main things I should cover?” And I actually have less of a strong reaction to the idea that somebody would use generative AI to come up with the three things to cover in a letter, and then go write the sentences themselves.

What I hear underneath what you’re saying is actually the same important value I have, and wrote about in the book with Afua, et cetera: in the world I want, in this beautiful, equitable world where everyone has their needs met in the ways that best meet their own needs, technology is there in service to our lives, and we are not bending our lives in order to make technology work. And maybe in that beautiful, equitable world there are people who have a technology, is it an app, is it even called AI anymore, whatever it is, that says, “Hey, Tony, don’t forget: today is the day you write the letter to your dad. It’s Friday; you always write it on a Friday. Make sure you do it, because it makes you feel happy to write that letter.” Maybe in that world there are some people who have a tool that helps them remember to do that. But what’s important to me, and what I think I hear is important to you, is that technology is there because we need it and want it, that it works in the ways we need and want it to work, and that our lives are not influenced and shaped in order to adjust to the technology.

Yes, I am saying that. I’m concerned that the change in us is beyond our recognition. We don’t see ourselves becoming less creative. And I’m not only concerned about myself; I can write a letter, and at 90 I’ll still be able to write a letter. But there are folks who are infants now, and those yet to be born, for whom artificial intelligence is going to be so much more robust, so much more pervasive, in ways we can’t imagine today, I don’t think. And what are those humans going to look like? I don’t know. Maybe they’ll be better humans. Maybe they will; I’m open to that. But I like the kind of humans that we are. I’m open to the possibility that there will be better humans, but what will their human interactions be? Will they have thoughtful conversations? Will they have human moments together that are not artificially outlined first, or maybe even worse, constructed for them? I don’t know. But some of my concern is also about those of us who are currently living, across the generations. Less for older folks, because their interactions with artificial intelligence are fewer. If you’re no longer working, your interactions with artificial intelligence may be nonexistent.
And I think it’s natural that as you get older, you’re less likely to be engaging with the tools than if you’re in your twenties, thirties, forties or fifties.

Well, my very human reflection on today’s conversation is that usually we start talking about any type of technology topic and you constantly interject that I need to be practical, I need to give recommendations, I need to explain how to do things. And I appreciate and welcome you joining me over here in theoretical land, about the impact of technology broadly across our work, across our missions, across our communities, across our future. Welcome to my land, Tony. I have appreciated this one-time opportunity to let go of the practical, tactical advice. I hope listeners had some thoughts, had some reactions; truly, email me anytime. But I hope that, if nothing else, this was an opportunity for folks to witness, to listen in on, and maybe talk along with in your own head, a conversation about what these technologies can be and what we need to think about with them. Because in any technology conversation, I think it’s most important to talk about people. That’s the only reason we’re using these tools, right? People made them; people are trying to do good work with them. So talking about people is always most important, and I hope folks take that away from this whole long hour of AI.

Thank you for a thoughtful human conversation. Yes: Amy Sample Ward. They’re our technology contributor and the CEO of NTEN. And folks can email me, tony@tonymartignetti.com, with your human reactions to our human conversation.

Thanks so much, Tony. It was so fun.

My pleasure as well. Thank you.

Next week: a tale from the archive. If you missed any part of this week’s show, I beseech you, find it at tonymartignetti.com. We’re sponsored by Donorbox. Outdated donation forms blocking your supporters’ generosity? Donorbox: fast, flexible and friendly fundraising forms for your nonprofit. donorbox.org.

Our creative producer is Claire Meyerhoff. I’m your associate producer, Kate Martignetti. The show’s social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.