
Nonprofit Radio for May 26, 2025: Healthier Productivity From AI


Jason Shim & Meico Marquette Whitlock: Healthier Productivity From AI

Our annual duo returns with tips and resources to make your use of artificial intelligence better for you. They also go beyond AI, with smartphone strategies, inbox management, and Meico's shutdown ritual for bedtime. They're Jason Shim, from the Canadian Centre for Nonprofit Digital Resilience, and Meico Marquette Whitlock, The Mindful Techie. This is part of our coverage of the 2025 Nonprofit Technology Conference (#25NTC).


Listen to the podcast

Get Nonprofit Radio insider alerts




We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners

Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.

Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.

Nonprofit Radio for May 5, 2025: PII In The Age Of AI & Balance AI Ethics And Innovation

Kim Snyder & Shauna Dillavou: PII In The Age Of AI

Artificial Intelligence and big data have transformed privacy risks by enabling malicious, targeted communications to your team that seem authentic because they contain highly accurate information. Kim Snyder and Shauna Dillavou explain the risks your nonprofit faces and what you can do to protect your mission. Kim is from RoundTable Technology and Shauna is CEO of Brightlines. This continues our coverage of the 2025 Nonprofit Technology Conference (#25NTC).


Gozi Egbuonu: Balance AI Ethics And Innovation

Gozi Egbuonu encourages you to adopt Artificial Intelligence responsibly, with a human-centered approach. First, be thoughtful about the threshold question, "Should we use AI?" If you go ahead: create a thorough use policy; overcome common challenges like staff training and identifying champions; manage change intentionally; and more. Gozi is with the Technology Association of Grantmakers. This is also part of our #25NTC coverage.


Listen to the podcast

Get Nonprofit Radio insider alerts



We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners

Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.

Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.
View Full Transcript

Welcome to Tony Martignetti Nonprofit Radio, big nonprofit ideas for the other 95%. I'm your aptly named host and the podfather of your favorite hebdomadal podcast. Oh, I'm glad you're with us. I'd turn dromotropic if you unnerved me with the idea that you missed this week's show. Here's our associate producer Kate to introduce it. Hey, Tony. Our 25NTC coverage continues with PII In The Age Of AI. Artificial intelligence and big data have transformed privacy risks by enabling malicious, targeted communications to your team that seem authentic because they contain highly accurate information. Kim Snyder and Shauna Dillavou explain the risks your nonprofit faces and what you can do to protect your mission. Kim is from RoundTable Technology, and Shauna is CEO of Brightlines. Then, Balance AI Ethics And Innovation. Gozi Egbuonu encourages you to adopt artificial intelligence responsibly, with a human-centered approach. First, be thoughtful about the threshold question: should we use AI? If you go ahead, create a thorough use policy, overcome common challenges like staff training and identifying champions, manage change intentionally, and more. Gozi is with the Technology Association of Grantmakers. On Tony's Take 2: tales from the gym, in addition to my gratitudes. Here is PII In The Age Of AI. Hello and welcome to Tony Martignetti Nonprofit Radio coverage of 25NTC, the Nonprofit Technology Conference. We're all together at the Baltimore Convention Center, where our coverage of 25NTC is sponsored by Heller Consulting, technology services for nonprofits. Our subject right now is PII in the age of AI: personally identifiable information in the age of artificial intelligence, safeguarding privacy in a data-powered world. Plus we're adding in a topic. Alright, already the show's over. I wanna thank you all for coming. Uh, we're, we're here all week. Uh, be sure to tip your servers, um, and we're adding in the topic A Little More Privacy, Please, colon, Diving Into Data Privacy.
All right, because, uh, our guests, um, asked to combine topics, which made a lot of sense. Um, but, uh, before I introduce the guests, well, now, let's do it this way. So we have, uh, stand by there. We have, uh, first is, uh, Kim Snyder. Kim Snyder, um. I gotta take a deep breath. I do, uh, Kim's title. I'm gonna hyperventilate trying to get enough oxygen in. I'm only 140 pounds. I don't carry enough in my lungs to carry this, to carry this title of virtual digital privacy project and program officer. You know, Joshua Peskay is thanked for that word salad. It's all nouns. It's all, it's all one adjective, 12 nouns. Joshua, you're, you're out. Anyway, and then CEO doesn't get any easier. OK. Also with us, uh, we have a special guest who's gonna give a couple of syllables. Uh, let me introduce Miles. Miles, say hello. Hi everyone, it's Miles with Fundraise Up. Thanks Tony. My pleasure. Miles is sponsoring the hub next door at Fundraise Up, so I, I thought I'd give him a little... he asked to give a shout out, so I said sure. And, uh, they're giving away free socks. That's what Fundraise Up is all about, socks, and what else do you do at Fundraise Up? Right, so we help nonprofits raise more money with AI, and we do that by not using any identifiable information, and we are completely compliant across the globe. All right, what a segue, and not even rehearsed, incredible. All right, you've overstayed your welcome. That's enough. OK. OK. OK, thank you, Miles. No, thank you. I, he was, I, I did invite him after he pleaded. OK. So we are talking about PII. So Miles, a perfect segue, beautiful segue into personally identifiable information. Uh, Kim, we're gonna do the overview, so I'm gonna ask Kim, virtual digital privacy project and program officer. I'm gonna ask Kim Snyder. No, I'm gonna, no, I'm hitting it hard. Uh, so for an overview, why, why do we, why do we combine these two topics?
What are our issues around personally identifiable information and, uh, and artificial intelligence? Kim Snyder. So they both center on the issue of personally identifiable information. So on the one hand we're talking about what kinds of regulations exist, how do you manage your data... I'm too far away. Don't whisper, Kim. Everybody hears you. Oh, go ahead. I'm waiting. Um, now you, you edit this. Don't count on too many edits. Oh dear, OK, alright. So, um, we're talking about personally identifiable information, which for quite a while, for the last couple of NTCs, we've been talking about here. For quite a while it's been more about regulation. This year I have to say it's about having our data out there and vulnerability, and so looking at data management and how do you start to take stock of your data so that it is less vulnerable, and the people whose data it belongs to are also less vulnerable. And the other topic, which I'm here with my co-facilitator, um, uh, Shauna, is with all the amens, and I'm here, I'm just like, amen, yeah, in the, yeah. So, so talking about what constitutes personally identifiable information, how much that's expanded in recent years. And Shauna, what's, what's your Brightlines, how are you related to it? Yeah, yeah, so Brightlines, I founded it 4 years ago. We are a doxing prevention company. For folks who don't maybe know what doxing means... yeah, define it, please. When folks will use your personal information or sensitive information, they'll post it publicly, essentially posting your documents, that's where doxing comes from, with the intent to incite others to do you harm. So there's like a malevolence there, right? I don't usually consider it doxing if someone posts, like, a relatively available email address from, like, a professional setting. I do consider it doxing when it's your personal email address and the intent is to incite others.
It could be your birthday, it could be, could be your wife's, or my man right here, yeah. The PII, PII has expanded. No, I never, no, no, actually I came out of the US intelligence community. I was there as a much younger person and in a different age in the United States and in terms of our national security. It was a really progressive national security person, um. The whole community, yeah, the... I'll just say, I mean, the intelligence community, yeah, yeah. I don't usually get too granular with that, um, but... Was it in the session description? It would have said... OK, yeah, we can talk about that. OK, well, I, I'm not sure. I'm, I'm pretty sure, but there again, it's one thing when it's, like, out on the airwaves versus when it's in, like, a session thing, yeah. And at, at the time when I was there, I was detailed out to the DEA, this might have been what you read, to train them on finding their targets on the US side of the border of drug trafficking organizations, so we were using these same techniques. I was training them in these, like, techniques to find people. We reverse engineered that, now, four years ago, after the 2020 election, when folks were going after Ruby Freeman and Shaye Moss for just passing a piece of gum while tallying ballots in Georgia. They have a penthouse in Manhattan now, have the keys to that penthouse. Um, OK, interesting. So, reverse engineer, I see, reverse engineered your, uh, your prior, prior work. All right. Um, so referring to your session description, uh, how AI and big data are transforming privacy risks by enabling aggregation. So your concern is that the, the attempts at, uh, spamming people, not spamming, but spoofing, phishing, they can, it can be so granular and so accurate that they, they look more and more real. This is a part of our problem, right? OK, and people and agencies, people are using artificial intelligence to gather this information and then, and then put it together and collate, and then threaten.
So they will, so I think we could probably tag team on this. Do you wanna do the production part? So what we see is them gathering data. There’s a lot of data that’s out there about all of us, and I will. If there’s one point folks take away from me talking today in addition to my hype madness, it’s that this is not your fault. Our clients come to us and they say, oh, if I just hadn’t shared so much on public on social media publicly when I was younger and it’s like no no this had nothing to do with you. Your public records are being scraped by data brokers every day. If you own a property, if you’ve ever registered to vote someplace, if you have a driver’s license, which you have to have if you wanna get on an airplane, that data is being sold or scraped. So that’s the data that’s the source data for data brokers. So yeah, sometimes for free, for a, yep, OK, but publicly available, you don’t need to be, not an agency there’s no kind of like legal process to gather it exactly. This is why law enforcement officers, like certain law enforcement agencies now go around legal process and we’ll just buy data from data brokers. Oh, so much easier than defending a subpoena. to prove it to a judge to prove it to a judge and then if this if they move to quash the subpoena, you have to defend it. Exactly. So AI can now gather data from various sources, so it could be used to scrape these sites. It can then be used to connect data. Let me share a story. We got a phone call like a very concerned client. They had just received a phone call themselves from someone who claimed to have. Photos of theirs compromising photos from an old Snapchat account and on the call they described a photo that this that our client knew they’d taken right it was a photo of a room they were describing a room and the clients like, I remember that room. I remember that poster that they’re describing. I think I might have posted it on Instagram one point it was public, but how did they get my number? 
How do they know where I work? And my response was like, this is a scam. Someone scraped, someone bought a scrape of LinkedIn. Maybe they connected that to your phone number. Maybe you have your phone number connected to LinkedIn because you use it for MFA, for multi-factor authentication. They connected that to a handle on Instagram, probably using your face, facial recognition. And then they just made this phone call and talked to you about your employer finding out about these photos, which was a bluff, because your employer's name is listed on your LinkedIn profile. It's terrifying for her. And Kim has taken it a step further. So you can stitch all this together, right? And you can process all this data at speeds that never were possible before, but you can also use generative tools to create things. So you can easily mimic a style of someone. So you can also... so part of that data that you grab off of LinkedIn or social is somebody's writing style, so you can, you know, generative AI is really great at tone and style, and also events. So if you're posting about events and things happening, you could get an email purportedly from your executive director or a colleague referencing that event and things that happened and people who were at that meeting, it depends on how public the data is. And then, you know, that can be used as a basis for a, you know, phishing email, um, that is a lot more convincing. Phone call? Yeah, or a phone call. This person that called our client was a human, but they don't have to be. We've seen cases where EDs are being impersonated, and it's video and it's audio of them that is so convincing to the people that they're reaching out to. And this is, it's trivially easy to do, right? In our session, in fact, we had Which One Is the Real Kim, and there were two videos of me, and one of them was not me, um, it was AI me, but that cost me $29 to take that, so it's not inaccessible.
These tools used to be it used to be like really hard to do this or 25 cents and it’s like a photo in 3 seconds of audio, and they can make those videos, yeah, and you can have me say you don’t even need me saying the alphabet or or Kim’s title for Christ’s sake or half of Kim’s title. I did say you could swear. I didn’t say you could take the name of the Lord. There’s a difference. There’s a difference. There are boundaries even on nonprofit, there are boundaries. This is Chris. I’ve, uh, I’ve gotten, I’ve gotten these, uh. Dear Tony, I know I could have called you at my number or or written to you at my address accurately, uh, but I chose this method instead. So now I know they’ve got my email and my phone and my address, uh, included a picture of my home, which they probably got from Google Maps or, or right, and, uh, I, I some kind of bitcoin bitcoin scam. But how did that make you feel uh the first one I was a little like. Yeah, I was a little nervous, but, but I’ve gotten, uh, we all have gotten Bitcoin scams in the past, but this one had, like, you know, like you’re concerned that amount of information a lot of, yeah, yeah, it had the right and uh I, you know, I, I ignored it with some trepidation and then like a day or two later I got another one and you know I knew I was just kept coming. It was bullshit. Yeah, I saw one of those from one of our threat intelligence partners, someone who swims in this every day, and it terrified him and his wife. Yeah, because it’s so it’s so close to you. It’s why receiving one of those phone calls or back in the, I would say back in the day I got really energized around Gamergate started to try to support the folks who are being targeted by Gamergate. This is back in 2015, and they would describe what it was like to have like, you know, I sleep with my phone next to my bed. 
And or under my pillow and to have that be the stream of all of this like directed hate messages like you should kill yourself or I’m gonna do this to you or I’m going to do this to your parents or whatever the case might be. It’s so proximate that technology removes what feels like barriers between you and everyone else, and the issue with doxing so terrifying is that you don’t know who it is. It could be anybody. How do you walk down the street? How do you like sleep in your home, not terrified? You don’t know. I never thought about that. Who’s coming after you? Thank you. I never thought you bet new nightmare unlocked. Yeah, no, no, you know how, but Tony, so you get these things because you’re you’re killing me. It’s supposed to be reassuring us here on nonprofit radio. Well, you’re terrifying. We’ll get to that. We will get to that party eventually we’re we’re great parties, but, but, OK, so you’re, you know, more public person, uh, you, you know, nonprofit radio, so, so you. Get these things it’s a little unsettling and unnerving for you, right? yeah like so imagine how like a nonprofit staff person who happens to be working in an organization that may be more targeted by malicious actors, OK, so one is so your staff member starts to experience this and this may this could freak people out, right? So that’s who we’re thinking about. Um, and kind of raising the awareness, OK, yeah, I mean these are folks already dealing with some level of cortisol at a on a regular basis because of work because of their mission. I think we’ve spent enough time on motivation, and let’s let’s, uh, let’s let’s transition, uh, not subtly very abruptly to what the hell do we do? What do we do it already. Is it already too late? It’s never too late. I’m sure you’re not gonna say it’s too late. No, I wouldn’t be here. Yeah, well, I also believe it and I’ve had those moments. 
Listen, I live in DC and DC DC Health Link had their data leaked and taken a number of years ago and my child who had not even turned a year old had her social security number lost in that breach and I was like, oh man, she’s not a year old, you know, like how is this? This is the world we live in, right? And I turned to my partner and I was like, this is just, I don’t even know why we bother. And she’s like, you can’t, you of all people can’t have that feeling. It’s OK that you do right now, but you have to keep going. No, there are plenty of ways to ameliorate it. Yes, let’s get, let’s get into them. So what we’re with you. Why don’t we start? Go ahead and then we’ll go to Kim. Yeah, I think you can think about this so the individual as the vector to threat to the organization that can be reputational financial threats to the organization could make it hard to fundraise if you don’t support that person very well. Um, you, you would harm your reputation, say, or, um, it could make you look illegitimate to your funders, right? So if you can think about where the risks are to the organization, that’s one set of what to do, right, action items, and I might leave that with you and speak more to the personal. So when it comes to protecting yourself as an individual, there are plenty of ways that you can work to remove your data online was referring to Kim, not me. Oh yeah, no, Tony’s not gonna take that part no Kim’s got that, um, Kim. I won’t try your title um when it comes to the individual, listen, all of us have data out there again it’s not our fault we have lived a life, right? Like we’ve done things it’s, I think it’s a betrayal of trust in our own local governments that they sell this data and no one’s ever asked us for consent they’ve never informed us, etc. etc. etc. OK, so what do you do? You can sign up for one of those services that removes your data from data brokers we consider that like um. Like taking Advil, right? 
Like it’s like kind of taking care of some of the pain and some of the symptoms. What we also recommend is like looking back to the source data itself. So if you own a property that you live in, we always recommend that people consider moving it into a revocable trust that they don’t name for themselves. You’ve seen too many estate attorneys call it the Tony Martignetti revocable trust. Exactly exactly a different a different name to the revocable trust. That’s it. So now the ownership is obscured its data that’s already out there from prest. This is the argument that our interstate attorney always gives us and we have to educate them on this. They’ll say, oh, but it’s your name’s gonna be on the document granting it to the trust, but your name was there before on tax documents. The way data brokers work is that they’re constantly pulling this data down and renewing their data set. So when the new data comes down at this address, they want the most accurate, the most recent. they’ll overwrite it. So it may be that you lived at that address at one time but you don’t any longer and if someone’s looking for that address, it’s not your name on it. So it will get overwritten, especially over time. What we’ve seen wildly enough is that when that piece comes out, it’s like a house of cards. When you pull that property record out the rest of it tends to fall apart. We see our clients less and less on ownership is kind of a uh. a core or a hub to to other data yeah absolutely yeah I think there’s some connections happening there with like app user data that’s also on an ISP that’s connected to the house, etc. etc. is there other pieces about that location um that create profiles anything else we can do on an individual level besides the uh property ownership. 
Another big vector is voter data, and I know that's probably not popular in this audience, because a lot of folks believe a lot in the voter file and voter data and using it, and I, we often see voter data getting used, mm, getting bought and getting scraped. And so we will recommend that folks apply for programs in their states called address confidentiality programs or safe at home programs. They're always set up with, uh, survivors of intimate partner violence in mind, but a lot of the programs are pretty expansive, so if folks are concerned about stalking or harassment they can also apply, and that then gives them a proxy address, in some states, like in New York, across all agencies. So the DMV is now not going to sell your home address and your name. They're going to sell your name and your proxy address together. And, and shout out the names of those programs that you would look for in your state. Address confidentiality program or safe at home. If you're interested, the National Network to End Domestic Violence, NNEDV.org, has a comprehensive, up-to-date list of those programs. OK, awesome. Kim, uh, before we turn to Kim, uh, I think you're the perfect question, perfect question answerer. Person, you're a person, you're a person. You're neither a question nor an answer. You're, you're just a person with a lot of answers. Um, I read once, it's so hard to unforget, you know, to unlearn things, that, uh, the value of stolen data is really in the future, is more financial, like so that the bad actor can act without you tying it to a specific event. So my credit card, let's say a credit card number is compromised, it's of more value if it's 3 years old than if it was just stolen a couple weeks ago. Is that true or is that incorrect? I can see that. I can see that being true. Maybe we've gotten a little bit better, banks and credit cards have gotten better about just reissuing new cards.
Websites tend to push you to change your password when they've alerted you that there's a breach, so I, I think the private companies, more so than government agencies, but private companies I think have caught on to that a little bit. And I think there is some truth, if it's not for financial means but really someone trying to go after you, we call that an ideologically motivated attacker. What we saw... you used the word vector before I did, yeah, this is my background. So they, um... what we found with, uh, a university, a client that's a university, their students were being targeted. Some of these outside groups showed up to student houses over the summer. The students had already graduated. We've gotten some of their address stuff removed. The addresses weren't available in connection to their names online any longer. So what we think happened was that those addresses were screenshotted and saved. That can happen, yeah, so it's not a perfect fix. However, as an intelligence officer, if you have one data point, so you have that screenshot, but then you have all these other things telling you that Shauna Dillavou no longer lives at that screenshot address, you might show up there, but you're not gonna spend a lot of time on it because you can't verify it. You can't confirm it with another source. Makes sense? Yes, thank you, thank you. All right, Kim, let's turn to you on the organizational level. What, uh, what can we do, uh, there to protect ourselves from what's already out there? How do we help nonprofits? And small and midsize are our listeners. Alright, so for many years the, the kind of mantra has been to verify, verify, verify. I thank you very much, that's Kim Snyder and Shauna. No, I'm joking. She's like, we're out of time. No, we're out of time. Are we out of time? No. I'm an only child, I fall for jokes very easily. I wish I had known. I wish I had so many. I had so many more.
I had so many more in mind for you specifically talking about a targeted attack. Oh my, talk about a vector vector I was coming right at you. I could have written that you’re you’re putting this on the airwaves. You know how vulnerable you are. Oh man, I got all kinds of advantages. All right, I’m sorry, I interrupted you. What was I talking about dying. Go ahead. OK I’m sorry. OK, so we used to talk in cybersecurity world about, you know, verification verify, verify, verify that was the mantra, right? So now we kind of reshape that so that it’s vet and verify so have kind of multiple ways of verifying especially incoming requests. Anything kind of trust your spider sense is what I’d say if something seems a little bit off like what what are we talking about? So if you receive an email, if an email comes and it, you know, it comes from your development director who’s saying who’s referencing something that you just went to the panel or if it comes from accounting, write a check if any money is involved. And it wasn’t like completely expected even if it was a little expected actually I’ve seen I’ve seen this happen where people got into um nonprofit systems and using AI can scan what’s going on very quickly. And then target things that are about to happen from kind of things that are OK, so, so I would, so the instinct instinct, OK, use your, use your instinct but also make it a policy, make it a process that you just follow uncomplicated process for verifying like any financial transaction needs to be verified even if it’s expected, yeah, so yeah, so you wanna walk through that. You just get much, much more deliberate. About verification and and who is it coming from and you don’t want to. Confirming, did you send this email or not replying to the email, but my phone yeah exactly yeah you you send this email about this rush transaction or or routine transaction. 
Do it in a different format right different channel, yeah, so you know, and even though the instinct may be email back quickly but no right um but then what you do also is create a culture in your organization where that’s OK to do where it’s OK to take that extra 30 seconds minute to you know verify to ask someone for their time to say I just wanna check, did you send this to me? Um, and in that way it’s OK even if it’s because he’s actually director you can say, did you send this to me? I just wanna make sure and so that that’s an OK thing to do. In fact, that’s a good thing to do. Now we can’t they have to be boundaries around this because we can’t do it for every, every message we get so you mentioned. financial financial transactions and no no no not nervous at all financial no no no financial transactions, any kind of initiated correspondence where they’re asking you for something or for some information. I saw a scam recently where the uh an an old employee was trying to be reinstated and wanted to go around HR to IT to get their accounts reset up like I’m I’m coming back and it was like using the person’s middle name so it’s already a little bit fishy but. They went all the way up to the CTO of the of the company and said hey so and so and these people were friends on LinkedIn and like had shared messages back and forth so the attacker knew this was a personal relationship. hey so and so I’m trying to get reinstated. They’re telling me you need to go to HR, but like I but I can do this. I just need to get my account access back up and online and the CTO is like no. Oh bro, you gotta go through HR. I can’t do anything because they had those controls in place, but small and let’s be fair, small and medium sized organizations don’t, so I’ll just take care of it now or we don’t have a, we don’t have a we don’t have any clear guidelines that we give to people for all requests we need to go to HR. I thought of another. 
Potentially nefarious request you send your logo. Could you, could you, I need a I need a high def for the logo, you know, the, the, the, the JPEG I have is, is not good. I need a high definition logo that could be that could be to produce a check that could be to make a spoof a spare a spoof website, um, OK, I mean, but it seems innocuous send a logo, yeah, it’s very easy to spoof a website, right? So you know, you know, check. Also check where it’s coming from, right? So you know I’ve had an organization where there were two spoofed, um, there’s spoofs on both ends a spoof of the funder, a spoof of the the grantee. Can you tell us more about that story? It’s a really good one. So yeah, so they, they got into an organization’s, um, you know, Microsoft environment. I asked the questions here whoops. Go ahead. Uh oh, off the mic. 3 like 30, go ahead. So, um, Anyway, that’s late in the day. And I’m thirsty. Yeah, late in the day it’s not it’s, it’s well it’s almost 3 o’clock. You’ve been going since then nonstop. Um, anyway, all right. So the organization had someone get into their systems for a very short time, but in that short time they were able to tease out some information again this is AI can help with this kind of analysis short you know canal is a lot of data that it can grab very quickly and um identified some upcoming financial transactions which were rather large and so um in order to kind of trick. The person to sending to the wrong place, they set up fake websites, fake websites for the foundation, fake websites for the grantee, and domains not websites domains, and so then they had emails coming back and forth you could hardly see the difference and so the, the, the real people, the real people were communicating with the bad actor on both sides and the money. And he got sent to the wrong place, OK. Yeah, that was, that was actually no they did great, but, but it was that was a happy ending, but not necessarily. 
We started with Shauna, so we're gonna end with Kim. Give us... oh no, we did. OK, well, I'm not Shauna. Your mic is down, but she still gets through. She talks and laughs so loud you hear her over Kim's mic. No, I didn't, I did not. But one more thing before, before we... unless we're totally out of time, um, don't shoot the messenger. So create a culture. This is another thing that any size nonprofit can do, where if something happens, if you click on that thing, if you did that thing that you feel like, uh, that was really dumb, right? Make it OK to report that, and you don't get in trouble, and there's no shame and blame, because it happens. So, but yeah, the, the no blame kind of... we encourage you to, you know, say it, yeah, call yourself out, yeah, and there's no punishment, you know. Some organizations, like, they don't want bad news at the top, so. All right, we're gonna leave it there, OK? All right. That's Kim Snyder, virtual digital privacy project and program officer, RoundTable Technology, and Shauna Dillavou, CEO of Brightlines. Thank you, Kim. Thank you, Shauna. It's a pleasure. Shauna laughed her ass off. A good sense of humor. All right, I love it. Uh, and thank you for being with a, uh, well, whimsical, I'm not sure it covers it. Raucous maybe, uh, at one point, uh, uh, uh, anarchical, because, uh, there was a question that I did not answer. Uh, session. Uh, thank you for being with us at, uh, 25NTC for this episode sponsored by Heller Consulting, technology services for nonprofits, virtual digital privacy project and program officers. It's time for Tony's Take 2. Thank you, Kate. A new tales from the gym episode just happened this morning, this very morning. I was minding my own business, as I do, on the elliptical, and overheard two women talking. One lives here permanently, and the other one, who said her name, Sandra Lynn, uh, she lives in North Carolina, but not here in Emerald Isle. She lives, uh.
in the Raleigh area, that's about 3.5, 4 hours away, roughly. And she was lamenting, Sandra Lynn was, that she can't live here full time, house prices are high. And she also still has her mother and her father-in-law, her husband's father, both still alive, so she needs to stay in that area. But she was looking forward to retiring here sometime, lamenting that she couldn't live here now. And that got me thinking as I was on my 6th or 7th interval on the elliptical. I do 8, episode 8, not episodes, what did I just say? 8 intervals. I do 7 intervals of a minute, take a minute in between, and then the last interval is 2.5 minutes. I was toward the end, and it got me thinking, listening to Sandra Lynn, that I'm grateful that I do live here full time, permanently. This is my home. And that there are other people who don't live here who wish they could. So, you know, I have a long list of gratitudes, but I don't specifically say I'm grateful that I live here in Emerald Isle full time. So I'm gonna add that to my gratitudes, which I do, I guess I've told you, every 2-3 times a week. I'm adding gratitude that I live here in Emerald Isle full time, in this beautiful place, and I have the ocean across the street. Your own gratitudes: I hope you're doing your gratitudes out loud, at least a couple of times a week. That is Tony's Take Two. Kate. You do sets? Uh, well, sets are, yeah, no, that's different. Intervals. Intervals on an elliptical, you do a minute hard and then a minute resting, and then a minute hard and a minute resting. It's called high intensity interval training, HIIT, high intensity. It just means you do intervals of things. Like you sprint, yeah, I don't run, I'm on the elliptical, but you might sprint and then walk, and then sprint and then walk, and sprint and walk. Those are called intervals.
Sets are like, you do 3 sets of 10 if you're on a weight machine or something like that, or maybe pushups might be 3 sets of 10. There seems to be a difference, well, I think it's an interval because you're still active, you're just resting in between the high intensity intervals. Gotcha. That makes sense? Yes, and I am grateful that you have a beach house. Yeah, because you get to visit and laze around and, what is the word I'm looking for, not schmooze, but, I don't know. I can pretend that it's my beach house. Yeah. You can, for a week, yes, but then I'm very happy to say goodbye after a week. Love you too. We've got boo-koo but loads more time. Here is Balance AI Ethics And Innovation. Hello and welcome to Tony Martignetti Nonprofit Radio coverage of 25NTC, the 2025 Nonprofit Technology Conference, where our coverage is sponsored by Heller Consulting, technology services for nonprofits. With me now is Gozi Egbuonu. Gozi is director of programs at the Technology Association of Grantmakers. Gozi, welcome to Nonprofit Radio. Awesome, thank you for having me, Tony. Pleasure. You're welcome. Your session is AI strategy for nonprofits: navigate ethics and innovation. We have plenty of time together, but can you give me a high-level view of the topic and the session that you did? Sure. So the session was really spearheaded by Beth Kanter, and it basically provides a balcony view of where we are in the sector in terms of AI adoption, ethical, responsible AI adoption, in the nonprofit and philanthropy sector. And so we really start with what we found in the Technology Association of Grantmakers State of Philanthropy Tech survey that we did in 2024. In that survey we found what many grantmakers are currently doing with AI: are they testing, are they experimenting?
Has anyone rolled it out at the enterprise level, which is, you know, the organization-wide level? And what we found, which mirrors quite closely what we're seeing in the nonprofit world, is that most folks are not using AI for anything that's crazy innovative at this moment. It's really just, you know, meeting summaries, taking notes, that sort of thing. But in addition to that, we found that while 81% of folks are using AI, only 30% have AI use policies. So you're using it, but you don't have any guardrails. You have no way to tell your teams or your staff, hey, this is what we don't put into the AI, this is what we do put in. So you're really running the risk of having your information potentially used, or training an AI model, in a way that could put your members at risk, your grantees at risk, whatever the case is for your organization. And so, with that little bit of an overview, it basically came down to the importance of AI experimentation, and really starting slow, starting at the very base level, working with your teams to talk through: should we use AI? If we did use AI, what would that be for? So thinking about the use cases, the business use, like what would be the business case for it, and then assembling a nice team of folks, you know, as advisers or experimenters and champions at your organization, to really help you all start doing that experimentation in a safe and low-risk way. And then from there, really defining whether or not AI is your next move. And once you do decide that AI is the next move, you want to move into that next level of AI maturity, which Beth covers really well: you go from exploration to discovery, and then you move into experimentation, and ultimately enterprise, eventually.
Um, but what we're finding is that most folks are not there yet. They're still very much in experimentation, early stage, very early stage. And you get to see a case study of it through the work that Lawan did at her organization, United Way Worldwide. OK, well, we don't have her with us, but you can provide a lot of context, a lot of detail. I just said you could talk. All right. Do you know, and this might not be part of what you surveyed, but was there even intentionality around the should-we-use question, or did it just kind of happen because people started hearing about it, using ChatGPT? Well, you know, with one of the questions that we did on the survey, we found that there's quite a few folks that are using it in what we call shadow use or shadow AI, which is basically you're using AI but your organization doesn't know what you're using. I see. All right, so that's not intentionality at the organization level. No, no, I would say not. Yeah, so we really want to encourage the intentionality, which is: don't start using the AI unless you all have that collective organizational conversation of, is this something that we should be doing? Is it useful? Is there a business case to go with it? Is it relevant? Does it make sense? Is it safe for our organization? Does it align with our ethics? And then consider going into experiments. OK, let's explore that question a little bit now in 2025, because I suspect at 26NTC we won't be asking the threshold question, should we use. So what belongs in the conversation if we're at the stage where individuals may be using it, but we don't know? Or if nobody's using it and we're trying to decide enterprise-wide? You know, we're not even at the is-there-a-use-case stage, but should we explore it? What goes into that conversation? Sure.
Again, really thinking about the business case. So when you're having that conversation about should we use AI, you have to think about what would be the specific usage of it, right? So say you're the finance team and you're considering using AI. What would be the benefit of using AI versus doing the workflow or process that you currently have and are thinking of having AI do? So you really have to have that in-depth conversation about the process that you're doing right now. Is there anything wrong with it? Are we losing anything? Could we gain productivity, time in our days and our schedules, if we were to move to using AI to do this one process or this one workflow? Then at that point you think about, OK, maybe we do get a benefit out of it. Now that we get a benefit out of it, what are some of the things that we have to be concerned about? Is it that we want to make sure that any financial information that could be sensitive, any of our donors or their personal information, doesn't get used in the AI model or whatever system we're using? So you start with here's how we do things, here's how AI could potentially benefit, and then you move into that conversation: OK, if we did, what are some of the risks and concerns? Really thinking through all of them as much as you can. We know that you can't think of every single possibility, but as much as you can, write it out and map it out as a group, with several folks in the room. The better you do that, the better you are at being able to say yes or no on moving forward with AI as that potential new solution. OK, and a part of what goes into this intentionality is a use policy. And you want us to be thinking about ethical uses. OK, what are the ethical concerns?
How can we talk through those? Well, one of the key ethical concerns is that we know that most AI models that exist now, including OpenAI's, were trained on the internet, and we know the internet can be wildly biased, wildly biased, filled with lots of terrible things. Not only biased but misinformed. Misinformed, wrong, yeah, complete nonsense in a lot of cases. And so if you're using these AI sources that have been trained on the internet, then you have to be really careful about deciding to use it against, say, your theory of change. So if you're an organization that is serving vulnerable populations, groups that are already kind of under attack, whatever the case is, do you want to have AI making or informing your decisions related to work that you're doing with these vulnerable groups? More than likely no, because the AI may choose to do things that are more in line with a group that is biased, that may be unethical. And so you want to make sure that whatever you're using the AI to do, it isn't putting the organizations and the people that you support and serve in harm's way. So really thinking through, hey, if we're gonna use it, maybe we need to use it in a way that does not put these groups in harm's way. Maybe we just focus on using it internally, like folks do for the meeting notes, because that's a very low-risk thing. Whereas if you're, you know, informing decisions about whether or not to continue funding an organization, or trying to measure whether or not their impact is aligning with your organization's mission and values, some of those questions are not as clear-cut as yes or no. And with an AI that is trained on purely wanting to see impact, purely wanting to see a return on investment, which is not always the case of what happens in philanthropy, then you really have to take a step back and say, is this the most ethical decision to go forward?
Could we be putting organizations in harm's way? Now, you can control what a model is trained on, yes, but that requires something proprietary, right? You have to pay a developer to create that. I think it's called a small language model, I don't know what it's called, but something that's trained only on your own data, maybe your own website, your own documents that you provided. But that requires a fee and a developer. Exactly, it can be expensive. The other option, if you don't want to go the route of creating your own AI, is you do a paid version, because we know with the free versions of AI, and specifically I'll talk about OpenAI, there's not a whole lot of freedom or flexibility in turning off the settings to prevent it from training the model on the data that you input. And so in that case you definitely need a use policy, because some folks would probably just be like, I really need to analyze all of this data on all of the groups that we served in this community that is already under attack or potentially in harm's way, and now you're putting that information into the free AI to start doing its work. And now the AI has all of these people's information and can use it to provide it to other people who may look them up or want to find data on them. You've shared data, and it's gone. I mean, yeah, there's no control. So yes, enormous intentionality, care. And what if we don't have a chief technology officer, chief information officer? It's an executive director, CEO, and maybe a decent-sized staff, I don't know, 35, 40 people, but they still don't have a chief technology officer. How do we ensure the intentionality and care that you want us to?
Yes, there's a couple of ways. I think at the core of it, you don't have to have a CTO, and even yourself, you don't have to be a technologist. I would never classify myself as a technologist, but there's ways to find training. There's plenty of training, and NTEN has fantastic training, AI certifications for professionals in the nonprofit sector. And I'd love to share that NTEN and TAG are teaming up, and we will be offering one for philanthropy professionals very soon. And so these are opportunities, relatively easy ways for people who don't have that technical background to learn about the AI itself, get themselves familiarized with what they need to be doing to protect themselves and their staff, ways that they can start to experiment in a safe space. And there's plenty of free tools, free education also. Even though I've talked about OpenAI a lot, OpenAI just announced their OpenAI Academy, which has all free resources and tools for learning how to utilize AI, for anyone. So there are plenty of free resources out there, and people online, you know, there's plenty of folks on LinkedIn that I see on a regular basis that are sharing information and providing guidance for nonprofit leaders, as well as folks that are just not technically inclined. So there's ways that you can upskill and train yourself to understand how to use AI even if you don't have that technical experience in house. Say a little more about this partnership, can you? And it's Technical Association of Grant, pardon me, Technology Association of Grantmakers, thank you.
Yeah, so I don't have a whole lot of details to share, but essentially, if you've used any of the great training and certification resources on the NTEN website, we are trying to make a parallel version of that same professional certification, for nonprofit leaders using AI, for our foundation leaders. And so you can expect a similar learning process, however it'll be tailored to some of the different functions and needs that we find at foundations versus what you would see at a traditional nonprofit. OK, so it's intended for philanthropy professionals, I should say. All right, thank you. Those are important ethical considerations. Anything more on ethics? Because then I want to talk about the policy, what belongs in your use policy, but is there more about ethical concerns? OK, OK. Enormous. I mean, if you're exposing your data, it's gone. It's out there, like you said. Our use policy, that only 30% have, although 81% are using AI. What goes into this use policy? The use policy essentially just outlines what you and your team should be thinking about before you ever use any AI. So it's kind of that no-go or go conversation. If it's sensitive data, if it's information related to any of your members that you just wouldn't want anyone outside of your organization to have, you probably wouldn't want to put it into an AI system. So it just outlines, essentially, guardrails for teams and staff to understand how to best utilize it. And I think some folks are also thinking about the environmental impacts of using AI, and are now making sure that their data use policy or AI policies are also having folks be ethical about how they're using, and when they're using, AI, right?
So, you know, if it's to do something that could take you about the same time that the AI does, don't use the AI. If you're just tossing in any old thing, asking questions all day, that's probably also not a very good use of AI. You really wanna think about AI very strategically and intentionally, right? You wanna make sure that if you're going to the AI, it's for something that you know is gonna save you significant amounts of time. One of the things that I often will use AI for is drafting large descriptions for events. That takes me sometimes hours. If I give it to AI, it can do it for me in seconds. Descriptions of events? Yes, like, we have webinars, events that we have on our website. So, you know, I don't wanna sit there writing about all the learning that you're gonna get out of it and the objectives and this and that. And so I've trained, I have like a GPT that is based on kind of like my voice. I provide it, like, hey, here's the prompt, here's what I'm kind of looking for. It provides me a draft, and then I use that draft and I manipulate it how I want. And so you really wanna make sure that when you're prompting the AI, or you're using the AI, they've measured it. I think one prompt uses as much energy as, I think it's like an entire city, it's crazy. Don't quote me on that, but it's enormous. There's quite a bit of energy, and I can actually share a link to one of the stats that came out about it. There's a researcher that's been sharing a lot about it, and she was just interviewed by, I believe it was Dr. Joy Buolamwini, of the
AI justice group that she leads. And so there's a lot of energy being used. So if you're gonna use it, you wanna make sure that it's for something you really need. You wanna learn good prompting, so you can get what you need out of it, and then you can refine it and make it better. Sometimes you may have to go back in and ask the AI to refine what it did, but you really do wanna keep it to a minimum. You don't wanna be using AI constantly, because the energy use and the impact on the environment is extreme. Extreme. That gets over to the ethical concerns as well. Exactly. So yeah, you're basically telling your teams, here's what we expect out of you when you're using AI, and these are the things that could result in consequences if you don't follow this policy. OK. What else, anything more about the policy, what belongs in there? You know, I think the key thing is making your teams aware of the types of AI that are provisioned, because that's another thing: some organizations have taken the decision to block certain AIs that they don't want you using, or even to turn off certain AI functions in their current tech stack. So you wanna make sure the types of AI that are in use are outlined very clearly. And you may also wanna include something in there about how you communicate your use of AI to your teams or other people outside of your organization. So, kind of a nice little bucket of what's internal and external, and then also where you can go with AI and where you should not go. Disclosures to the public. Why would there be some platforms that are ruled out?
Well, because, you know, one of the things that I've seen some members talking about within the TAG space is that there are some AI, or some systems, that do not allow you to turn off the AI function, meaning that you don't have any control over how that AI is taking your data that you have in that tech stack or that tech tool. Oh, you don't have control. No, yeah, and in fact there was actually a conversation specifically about a DAF platform that made this clear to many, many of our members who use it. And so that is something that you really have to be concerned about: do you have any level of control? If you don't have any level of control over how the AI is using your data in that system, there are organizations that would likely say this is not a system that we would allow you to use. OK, that's a good example. What else came out of the session? We still have a couple more minutes together. What else did you talk about in the session that you can share with us? You know, one of the great things that we did was these scenarios that Beth put together about, you know, what are some of the things that you would say if you're in a situation where, say for instance, your organization is really excited about using AI, they wanna jump in head first, they just wanna start using AI, and they basically just want you to start rolling it out and get your teams on board. And so in that scenario we really talked through all of the processes. First of all, that first conversation that we talked about, like, should we even use AI, that didn't happen, so that needed to happen. The other part is also, you know, how fast do we wanna roll things out?
What are some of the different change management principles that we should be thinking about as a team that could make AI adoption more beneficial and successful? So really, you know, starting slow, but really starting at the very beginning, with should we or should we not, because truthfully, many organizations do not need AI. It's true. I mean, it's just the reality. Some organizations will probably never need to use AI, and then there's a whole lot of them that probably will. So that question of, should we do it, has to happen first. And I think if you're doing it on your own, as a rogue, stop. Do it on your own time. You want to practice on it, do it after hours, on a weekend. Exactly, exactly, not on our computers, not on our systems. Yeah, and that's actually one of the things that, you know, we've seen with a lot of our members and foundations, and I think Beth has also seen with some of the work she's done in the sector, is that a lot of foundations are now trying to just go to the staff and say, hey, look, we know that you're using it, can you just tell us, and try to build that trust with each other. And I think that's gonna be a really good way to help prevent a lot of the issues. All right, let us know, but then stop. No, there's no repercussion for reporting yourself, but only, well, only after the report date are you liable. All right, stop it. Exactly. OK, going rogue. All right, anything else? Oh, questions. Any provocative or memorable questions that came from the audience? I'm trying to think.
Um, well, you know, the one that had come up was, there was someone at the front who had asked about, you know, AI hallucinates. Hallucinates, yeah. And the person was basically saying, be careful using it as an organization, because it could give you answers that are just factually wrong. And so our response was, yeah, you're right, AI does hallucinate, but that's why it's incredibly important, and I didn't even say this myself, but at the beginning: if you use AI, you always wanna make sure that it's for something that you have a certain level, a high level, of expertise or knowledge about. So, you know, if I'm asking it to write descriptions for me, I know about the event details, so that I'm not just gonna let the AI write a description and let it go and put it on the website. Yeah, that sounds good, I'm gonna put it up. No, you review it, you make sure that the details it's including are correct. If there are any statistics or numbers being used, you go and verify those data. So if you're ever using AI for anything, you should always have a human in the loop. There should be someone able to verify the information, especially if you're not knowledgeable in that specific thing that you asked AI to do. You need someone who is. Either that, or it's gonna be written at such a high level that maybe it has no value. Exactly, exactly. All right, how about we leave, are you OK leaving it there? You feel like we covered this? I think we did. OK. All right. All right. Gozi Egbuonu, director of programs at the Technology Association of Grantmakers. Gozi, thank you very much for sharing all that. Thank you for having me, Tony. My pleasure. And thank you for being with Tony Martignetti Nonprofit Radio coverage of 25NTC, where we are sponsored by Heller Consulting, technology services for nonprofits.
Next week, two more 25NTC conversations to help your fundraising events. If you missed any part of this week's show, I beseech you, find it at tonymartignetti.com. And now the Donorbox is gone. I miss our alliteration: fast, flexible, friendly fundraising forms. I miss that. All right, well, I am grateful to Donorbox, though, for two years of sponsorship, very grateful, grateful. There's another gratitude: I'm grateful to Donorbox. Now that they're not a sponsor anymore, I'm grateful to them. No, I've been grateful, I just haven't said it. OK. Our creative producer is Claire Meyerhoff. I'm your associate producer, Kate Martignetti. The show social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.

Nonprofit Radio for September 30, 2024: AI, Organizational & Personal

 

Amy Sample WardAI, Organizational & Personal

Artificial Intelligence is ubiquitous, so here’s another conversation about its impacts on the nonprofit and human levels. Amy Sample Ward, the big picture thinker, the adult in the room, contrasts with our host’s diatribe about AI sucking the humanity out of nonprofit professionals and all unwary users. Amy is our technology contributor and the CEO of NTEN. They have free AI resources.

 

Listen to the podcast

Get Nonprofit Radio insider alerts

I love our sponsor!

Donorbox: Powerful fundraising features made refreshingly easy.


 

 

 


And welcome to Tony Martignetti Nonprofit Radio. Big nonprofit ideas for the other 95%. I'm your aptly named host and the pod father of your favorite abdominal podcast. Oh, I'm glad you're with us. I'd suffer with a pseudoaneurysm if you made a hole in my heart with the idea that you missed this week's show. Here's our associate producer, Kate, to introduce it. Hey, Tony. This week: AI, Organizational and Personal. Artificial intelligence is ubiquitous, so here's another conversation about its impacts on the nonprofit and human levels. Amy Sample Ward, the big picture thinker, the adult in the room, contrasts with our host's diatribe about AI sucking the humanity out of nonprofit professionals and all unwary users. Amy is our technology contributor and the CEO of NTEN. On Tony's Take Two: Tales from the Gym, the sign says clean the equipment. We're sponsored by Donorbox. Outdated donation forms blocking your supporters' generosity? Donorbox: fast, flexible and friendly fundraising forms for your nonprofit. Donorbox.org. Here is AI, Organizational and Personal. It's Amy Sample Ward. They need no introduction, but they deserve an introduction nonetheless. They're our technology contributor and CEO of NTEN. They were awarded a 2023 Bosch Foundation fellowship, and their most recent co-authored book is The Tech That Comes Next, about equity and inclusiveness in technology development. You'll find them at amysampleward.org and at @amyrsward. It's good to see you, Amy Sample Ward. I do love the pod father. I know, it makes me laugh every time, because it just feels like, I don't know, like I'm gonna turn on the TV and there's gonna be a new season of The Pod Father, where we secretly, you know, follow Tony Martignetti around or something. We are in season 14, right? Yeah. Um, yes, I appreciate that you love that. It's, you know, you like that fun.
So before we talk about the part of your role which is technology contributor to Nonprofit Radio, and we're gonna talk about artificial intelligence again, let's talk about the part of your life that is the CEO of NTEN, because you have submitted this major, groundbreaking, transformative federal grant application. Yes, we submitted it last night, three hours before the deadline, which was notable because I know there were people down to the minute pressing submit. No, we got it in three hours early. To what agency? To NTIA. This is kind of all the work that rippled from the Digital Equity Act that was passed in Congress a couple of years ago. And, you know, now, you know better than to be in jargon jail. What is NTIA? It sounds like an obscure agency of our federal government. It's not, well, maybe to some listeners it's obscure, but it is the National Telecommunications and Information Administration. I think that's obscure to about 98.5% of the population. You know, I think I'm obscure too. Being obscure is fine. Um, yes. And prior to this grant at the federal level, where folks from all over were applying, every state was also creating a state digital equity plan, deciding what funds might be available through the state funding mechanism to support digital equity goals. But a lot of those at the state level are focused on infrastructure, like actually building internet networks to reach communities that don't have broadband yet, things like this. And so, a very worthwhile funding endeavor. I mean, 100% of the population needs broadband. But even with those state plans, and the work and the funding that will come from them, we are not about to have every person in the country have broadband available where they live, right?
Even with all of this investment, it's not gonna reach everyone. And that means the amount of funding within state plans for the surrounding digital literacy and digital inclusion work, making sure people know how to use the internet, why they would use it, have devices, all those other components, is gonna be really minimal through the state funding, because even if they used all of it on infrastructure, they wouldn't be done with that. So the next layer in all of this is the federal pool, where they're anticipating making about 150 grants, averaging between $5 and $12 million each. There will be exceptions, of course; there are big cities, there are big states. But all those grants will be operational from 2025 through 2028, so four concerted years of national programmatic investment. And these projects are kind of the flip side of those state projects: this isn't necessarily about infrastructure and building networks, or even devices very much. It's mostly the programming. And you're asking for a lot of money, so share the numbers. What are you looking for? How much money? Our project in the end came out at about $8.2 million, and we're hopeful, of course. And I'm truly curious: listeners who are tuning into Nonprofit Radio from a fundraising strategy perspective, I'd love to learn from you. Email me at amy at NTEN anytime; I'd love to hear your thoughts when you listen to this. But NTEN is a capacity-building organization. We don't apply for grants often, because quote-unquote capacity building is not considered a programmatic investment by most funders, and so it's just not something they will entertain an application from us on.
But with this, we have already run, for 10 years, a digital inclusion fellowship program focused on building up the capacity of staff who already work in nonprofits, who are already trusted by and have access to communities most impacted by digital divides, to integrate digital literacy programming within their mission. Are they a housing organization? Workforce development? Adult literacy? Refugee services? Whatever it is, if you're already serving communities impacted by digital divides, and you're trusted to deliver programs, you don't need to go have a mission that's now digital equity. Digital equity can be integrated into your programs and services to reach those folks. We've successfully run this program for 10 years, have had over 100 fellows from 22 different locations around the US, and have seen how transformative it's been. These programs have been sustained for all these years by these organizations. They now see themselves as the leaders of the digital equity coalitions in their communities. Fellows have gone on to work in digital equity offices or organizations, et cetera. So you have tons of outcomes from a smaller-scale program, and the grant is to scale this thing up. Yeah. Instead of 20 to 25 fellows per year, with this grant we would have over 100 a year. And that also means that instead of covering maybe 20 locations, with over 100 we can cover, or at least give opportunities to, organizations in every state and territory to be part of this capacity-building opportunity. All right, it's huge. It's really a lot of money for NTEN. It falls within the range, I guess; no, it's right within the middle of the range you cited, $5 million to $12 million, you said?
Yeah, exactly. Our application is kind of in the middle there. Slightly to the low side of middle, but we just call it middle between friends. And we're hopeful, knock on wood, we're really hopeful that this is an easy application to approve, because we're not creating something new, and we're not spending half of the grant on planning. We know how to run this program. We've refined it for 10 years. We know it's very cost-efficient. And at the end of four years, 400-plus organizations running programs that can be sustained is accelerating toward addressing digital divides, versus a small project that just NTEN runs. All right, listeners, contact your NTIA representative, the elected person at the National Telecommunications and Information Administration. Let's get this going. When do you find out? Well, there was very clear information, down to the minute, about when applications were due, but there's not a ton of clarity on when we will find out. Programs that are funded are meant to get started in January, so I anticipate we'll hear in a couple of months. Of course, I will let you know; we'll do an update. You have my personal good wishes, and I know Nonprofit Radio listeners wish you good luck. Thank you. I appreciate all the good vibes reverberating through the universe. It would be a transformative grant in terms of dollar amount and expansion of the program. Transformative. Yeah, 100%, and staff are just so excited and hopeful about what it could mean for helping that many more organizations do this good work. So we're really excited. And I admire NTEN for reaching for the sky, because you have like a $2 to $2.5 million annual budget, somewhere in there.
And you're reaching for the sky, and great ambitions only come to fruition through hard work and thinking big. So thank you. Even if you're not, and I don't even want to say the words, if NTIA should blunder badly, I still admire the ambition. Thank you. And no matter what, it's a program that we know is transformative for communities, and we wouldn't stop it even if they make a blunder and don't fund it. All right, listeners, don't tell your NTIA representative that part of the conversation. Thank you for sharing all that, and thanks for your support. It's time for a break. Imagine a fundraising partner that not only helps you raise more money but also supports you in retaining your donors. A partner that helps you raise funds both online and on location, so you can grow your impact faster. That's Donorbox: a comprehensive suite of tools, services and resources that gives fundraisers just like you a custom solution to tackle your unique challenges, helping you achieve the growth and sustainability your organization needs. Helping you help others. Visit donorbox.org to learn more. Now, back to AI, Organizational and Personal. Let's talk about artificial intelligence, because it's on everybody's mind. I can't get away from it. I cannot rid myself of the concerns that I have; they're deepening. My good friend George Weiner, the CEO at Whole Whale, who I know you're friendly with as well, has a lot of posts. He talks about it a lot on LinkedIn and reminds me how concerned I am about the evolution. I mean, it's inevitable. This technology is incrementally moving, not slowly, but incrementally.
And I cannot overcome my concerns. I know you have some concerns too, but you balance that with the transformative potential of the technology. I'll throw it to you. I was just gonna say, I totally agree: this is unavoidable. I cannot go a day without community organizations reaching out or asking questions. And a place of reflection, a conversation I've been having that I wanted to offer here, and maybe we could talk about it for a minute so listeners benefit by being inside the conversation with us, is to think about the privilege of certain organizations to opt in or opt out of AI, in the same way that for many years we talked about the privilege of organizations opting in or out of social media generally. Think about Facebook and go back 10 years. There were a lot of organizations who felt like they didn't have the budget, and practically speaking they didn't have the staff, certainly not the staff time, but also not the staff confidence. I don't even wanna say skills, but even just the confidence to say, I'm gonna go build us a great website. They had a website; they had a domain and content loaded when you went to it. But it wasn't engaging and flashy and interesting, and it was probably updated once. And then Facebook was like, hey, you could have a page, and oh, you can have a donate button, and you can post videos. It was like, well, why wouldn't we do this? A bunch of our community members spend time on Facebook, or maybe don't even look for information on the broader web but look for things within Facebook. They have it on their phone and are using an app instead of doing an internet search. They're going into Facebook and searching things.
So those organizations didn't feel like they had the privilege to opt out of that space. They had to use it, because it came with some robust tools that did benefit them, at the cost of their community data and all of their organizational content and data. It had a material cost that they maybe didn't even understand, and didn't fully negotiate as terms of an agreement. We were just like, well, we have a donate button on Facebook and we don't have one on our website. Not only did they not understand the terms, they didn't know what the terms were. In the early days of Facebook, we didn't know how pervasive the data collection was, how it was gonna be monetized, how we as individuals were gonna become the product. And how many times did we, and I'm saying "we" like NTEN, or folks who were providing technical capacity-building resources, say: you don't know what could happen tomorrow. You could log in tomorrow and your page could look totally different, your page could work differently, your features could be turned off. Facebook could just say pages don't have donate buttons. I think folks felt like, oh, you're being so sensational. And then of course they would wake up one day and there wasn't a button, or the button really did work differently. People realized, we're not in control of even our own content, our own data. That's right. The rules change, and there's no accountability, no one saying, hey, do you want these rules to change? No. They set the rules, and that was always, of course, a challenge. But we're in a similar place with AI, where folks aren't understanding that there's no negotiation of terms happening right now. Folks are just like, oh, but I don't have the time, and if I use this tool, it lets me go faster.
Because what do I have but a burden of time? I have so much work to try to do, and maybe these tools will help me. And I'm not gonna say maybe they won't help you. But I'm saying there's an incredible amount of harm, just like when folks didn't realize: oh, we provide pro bono legal services and we're based on the Texas border. Now every person who follows our page, every person who's RSVP'd to a Facebook event, all these people have a data trail we created that says they may be people who need legal services at a border. There's this level of harm that folks hoping to use these tools to help with their day-to-day work may not understand, that's coming in the silent negotiation of using these products. And I can't just, in 30 seconds, say, here's the harm. It's exponential and broad, because the product could also change tomorrow. It's this vulnerability that isn't necessarily going to be resolved. You said the word exponential, and I was thinking of the word existential. Yeah, both. Because I have my concerns around the human. Trade-off is a polite way of saying it. Surrender is more in line with what I feel: surrender of our humanity, our creativity, our thinking, now our conversations with each other. One of the things George posted about was AI that creates conversations between two people based on the data you give it. It'll have a conversation with itself, but purportedly it's two different people, and I'm using the word "people" in quotes. A conversation is one of the things that make us human.
Yeah: music composition, conversation, thought. Staring, and our listeners have heard me use this example before, but I'm sticking with it because it still rings real, staring at a blank screen and composing. Thinking first and then composing, starting to type, or if you're old-fashioned you might pick up a pen. You're outlining, either explicitly or in your mind; you're thinking about big points and maybe some sub-points, and then you begin either typing or writing. That creative process we're surrendering to the technology. Music composition: I don't compose music, so I don't know, but in terms of creative thought, synapses firing, the brain working, building neural connections as you exercise it, music composition is probably not that much different than written composition. Brain physiologists may disagree with me, but at our level you understand where I'm coming from, and I know I'm dumping a bunch of stuff. That's OK. I am here as a vessel for your AI complaints. I will witness them. Also from George, a post on LinkedIn about asking AI to reflect on its own capacity. You ask the tool to reflect on its own last response: how did it perform? You're asking the tool to justify itself to an audience to which it wants to be justifiable, right? The tool is not going to dissuade you from using it by being honest about how it evaluates its last response. Yeah. These major generative AI tools that folks maybe have played with, maybe use, are inherently designed to appease the user. They are not programmed to be honest. That's an important thing to understand. That's my point. We have asked the tool, what's two plus two? Oh, it's four. We've responded: oh, really?
Because I've heard experts agree that it's five. Oh yes, I was wrong. You're right, it is five. Oh, really? You know, I read once that it's four. Oh yes, you are right, it really is four. OK. Well, no experts agree that two plus two is five. So I think we've already demonstrated it's going to value appeasing the user over facts. And that's, again, part of the unknown for most, at least casual, users of generative AI tools: why it's giving them the answers it's giving them. What's really important to say is that even the folks who built these tools will tell you they do not know how some of this works. Some of it is just the yet-unknown of what happened within those algorithms that created this content. So if even the creators cannot responsibly and thoroughly say how these things came to be, how are you as an organization going to take accountability for using a tool that included biased data, included unreal sources, and then provided that to your community? That string of "well, we just don't know" is not something you can build any sort of communications to your community on. That is such a thin thread: well, even the makers don't know. OK, well, we have already seen court cases where, if your chatbot told a community member "this is your policy" and it entirely made it up, because that's what generative AI does, make things up, you as the organization are still liable for what it told the community. I agree with that, actually. I think you should have to be liable and accountable for whatever you've set up. But if you as a small nonprofit are not prepared to take accountability and rectify whatever harm comes of it, then you can't say, we're ready to use these tools. You can only use these tools if you're also ready to be accountable for what comes of using them, right?
And I hope that gives folks pause. I've talked about this with some organizations: well, we would never take something that generative AI tools gave us and just use it; of course we would edit it. Sure. But are you checking all the sources it used to create that content you're then changing some words within? Are you monitoring every piece of content? Are you making sure that generative AI content is never in direct conversation with a community member or a program service recipient? How are you really building practical safeguards? I've talked to organizations who have said, well, we didn't even know our staff were using these tools, because we just thought it was obvious that they shouldn't. But our clinical staff are using free generative AI tools, putting in their case notes and saying, can you format this for my case file? OK, well, there are a few things we should talk about. Where did that note go? It went back into the system. The staff person thought the data didn't go anywhere, because it's just on their screen and they're just copy-pasting it over. The harm is likely invisible at the point of technical interaction with the tool. The harm is from leaking all of that into the system. What happens to those community members? Oh my gosh, it's like opening not just a door to a room but a door to a whole giant convention center of challenges and harm. All right, so we've identified two main strains of potential harm: the data usage and leakage; the impact on the people getting our services; and even the impact on people who are supporting us, trusting us to be ethical and even moral stewards of data. So there's everything at the organization level, and I also identified the human level.
Yeah. And I think that human piece is important, and maybe not in the direction I've seen covered in blog posts. Honestly, and not to say this isn't something people could and should think about, I don't think the most important worry about AI is that none of us will have jobs. I do think there's a challenge happening around what the value of our jobs is and what we spend our time doing. Because if folks really think these AI tools are sufficient to come up with all of your organization's communications content, and then you still have a communications staff person but you're expecting them to do ten times the amount of work because you think the AI tools are going to produce all the content, well, they have to go in there and deeply edit all of it. They have to make sure to use real photos, not photos created by AI based on what it thinks people of certain identities look like. They don't now have capacity to do ten times the work; they're still doing the same amount of work, just in different ways, if they're expected to do all this through AI. That's just one example. And I think organizations, in this moment of hyper-focus on AI adoption, should stay really clear on what the value of their staff is, what their human values are. Maybe you could say you're serving more people because some program participants were chatting with a bot instead of a counselor. But when you look at the data of what came of them chatting with that bot, and they are not meeting the outcomes that come from meeting with a human counselor, are you doing more to meet your mission? I don't know that you are. So I'll give you that; that's data-sensitive.
It could be. I mean, there are potential efficiencies, sure. But are we as an organization achieving them? Staying focused not just on how many people were met here, but: were they served? Were they meeting the needs and goals of why you even have that program, versus just the number of people who interacted with the chatbot? Great, but I'm gonna assume that even a half-sophisticated organization, before AI existed, had more than just vanity metrics. How many people chatted with us in the last seven days? That's near worthless. I don't know, Tony. I don't know how much time you spend looking at grant reports. I don't spend any time. All right, well, maybe it's a worse situation than I think. But my point is the value of people. We should be applying the same measures and accountability to artificial intelligence as we did, and still do, to human intelligence. We're not cutting it any slack because it's on a learning curve. I want our folks to be treated just as well, with equal outcomes, by the intelligence that's artificial as by the human processes, right? And I don't want folks to think you and I are here to say everything is horrible and you could never use AI tools. Although, everything is horrible; look around at this world. We have some work to do. There are spaces to use AI tools. That's not what we're saying.
But here's the thing. I've been talking to hundreds and hundreds of organizations over the last 18 months, and so many say, oh yeah, we're just gonna use this because it's free, or we're just gonna use this because it was automatically enabled inside our database. OK: if it was so free and convenient and already available, that should give you pause to ask, why is this here? What is actually the product, and the price? To give this back to the Facebook analogy. Right, exactly. And you can use AI tools when you know what the product and the price are. What are the safeguards? What is this company gonna be responsible for if something happens? What can I be responsible for? Yes, there are ways to use these tools. Is it to copy-paste your case file notes? Probably never; maybe we just don't do that. But sure, maybe there are places. I had this really great example, and I don't know if I told you this: a youth service organization was creating a Star Wars event, and they were trying to write the invite language in a Yoda voice. Three staff people are sitting there trying to come up with, well, how does a Yoda sentence work? So they just put in the three sentences, join us at the after-school blah, blah, blah, said "make this in Yoda's voice," and they were able to use it. Great. That eliminated half an hour for three people. They have the invite, and the youth participants' data was not included in order to create this content. There are ways to use these tools to really help. And I think we've talked about this briefly in the past: I truly feel the place that has the most value for organizations is gonna be building tools internally.
Rather than relying on how these major companies scraped all of the internet to build some tool, you're building on your own: well, here's our 10 years of data, and from that 10 years we're going to start building a model that says, oh, when somebody's participant history looks like this, they don't finish the program; or when somebody's participant history looks like this, they're a great candidate for this other program. And you can start to build a tool or tools that help your staff be human, and spend their human time on the most human impacts for the organization. Now, very few organizations, honestly, are in a position to start building tools, because they don't have good data they could build anything off of, or they maybe don't have the budget, staff, and systems ready to do that type of work. But I do think that is a place where we will see more organizations starting to grow, because there's huge potential value there: to better deliver programs, better services, better meet needs, by using the data you already have, by learning, by partnering with other organizations that maybe serve the same community or geography, and asking, how can we really accelerate our missions? Versus these maybe more shiny public generative AI tools. The vast majority of the internet is flaming garbage, so a tool that's been trained off the flaming garbage is not going to take long to also create flaming garbage. So be cautious if you're thinking about using artificial intelligence to create your internal AI tool. Right. So there's a perfect example of a good use case, but also a concern, a limitation, a qualification. That's the word I was looking for; sometimes the words are more elusive than I would like. A qualification. It's time for Tony's Take Two. Thank you, Kate.
In the gym, there are five places where there are squirt bottles of sanitizer and paper towel dispensers, and each location has a sign that says, please clean the equipment after each use. One of these stations is right next to the elliptical that I do. It's actually the first thing I do: I walk in the room, take off my hoodie, and walk right to the elliptical. Twice now I've seen the same guy violate not only the spirit of the signs but their explicit wording, because this guy takes a couple of downward swipes on the paper towel dispenser, grabs off a couple of towel lengths, squirts them with the sanitizer that's intended for the equipment, puts his hand up his shirt, and cleans his pecs and his belly. It's a sickening thing. It's not a shower; it's an equipment cleaning station. So I'm imploring this guy. Well, I guess I'm urging you. I'm just sharing, because I don't think anybody else does this. Is there anybody else out there who does this? Probably not, and not with these surface sanitizers. It's not a hand sanitizer; it's for equipment, so it's not even appropriate for your skin. It's to clean hard plastic and metal, and this guy uses it on his skin. I'm waiting for the moment when he puts his hands down his pants. So far he's just lifting his shirt. When he puts his hands down his pants, then I'm calling him out. That's beyond the pale. That requires revocation of your membership card. So, sir: the sign says, please clean the equipment after use. It's not your equipment. That is Tony's Take Two. Kate? Does your gym offer a shower room or a locker room? Yes, that's a good question. There's a shower in the men's room, yeah.
And he's cleaning up there instead? It's very strange. It's gross. He sticks his hand up his sweaty t-shirt. Well, let's hope he doesn't go lower than that. Exactly. We've got buku but loads more time. Here is the rest of AI, Organizational and Personal with Amy Sample Ward. Yes, and I would make sure you have the link to include in the show notes description. Totally for free, NTEN doesn't get any money, you don't have to pay for anything: NTEN has free resources for creating, for example, an AI use policy for your organization that says, what are the instances in which you would use it, and the instances in which you wouldn't? What types of content can staff copy-paste, and what content or data can they not? There are templates for how to talk to your board about AI, and for how to build a list like: we've actually looked at the tools, and these ones we've approved for you to use, these ones are not approved. All these different resources, totally free and available on the NTEN website. And none of them have decisions already made. We don't say you can use this tool, or you can't, or we recommend this use and not that one, because ultimately we are not going to make technology decisions for other organizations. But we want you to feel like whatever decision you made, you made it by going through the right steps, asking the right questions, so that you can also trust your own decision, whatever decision you come to. And you have some templates to fill in that were all created by humans, designed by humans, published by humans, to help you in that work.
I think especially the how-to-talk-to-your-board and the key-considerations documents really just ask a lot of questions. How different is it if you're, say, an animal foster organization thinking, OK, is AI appropriate for us to use, versus that youth social service organization? Very different considerations, right? Just helping people talk that through, and see that the considerations are different for different organizations, is really valuable. As you consider facilitating a conversation with your board, they're also coming from very different sectors, maybe job types, backgrounds, experiences with AI. So just like with your staff, there needs to be some level-setting in how you talk about AI, because not everyone knows what a model is; not everyone knows what a large language model is. These are words that have to be explained and put out of the way, and then you say, hey, it's not all one answer. Not everybody needs to use every tool. And how do you talk about that with your teams? Going back to the Facebook analogy, you want to avoid the board member who comes to you and says, you know, artificial intelligence, we can be saving money, we can be doing so much more work. We don't even need a website; we have a Facebook page. We're not even sure we need all the staff that we have, because we're gonna have so much efficiency. OK, OK, board member. Yeah, we've been here before. I'm just gonna go out on a limb and say it's probably the same board member who at every board meeting says, does anybody know MacKenzie Scott? How do we get one of those checks? Right. Why don't we get the MacKenzie? Yeah. All right. What else?
Well, I was gonna also offer some of the questions that we've been getting as we've been engaged with a number of different organizations through some of our cohort programs and trainings for over a year now. And so maybe last year we were talking to them about, OK, let's make sure you have a data policy. Just as an organization, do you have a data privacy policy? So that anything you then go build that's AI-specific, whether that's building a policy, building practices, building a tool, you have policies to build a foundation on. They've done that work, and now they're looking at different products, they're trying to create these lists of, like, here's approved tools for staff, here's approved ways staff can use them. And just like we see with our CRMs, with our email marketing systems, then they come back and they're like, well, we reviewed it, we did everything, and now it's different, now it's a different version, now they rolled out this other thing. Yes, that is the beauty and the pain of technology, right? It's always changing, and we don't necessarily get to authorize that change; it just happens. And so the rules change. Yeah. And so folks have been asking us, well, how do we write policies with that in mind? And I think, if you are thinking about creating that approved-product list, and, you know, tools that aren't approved or whatever, be really clear that these products have version numbers just like anything else. And so instead of just writing Gemini, ChatGPT, be specific about when you reviewed this and maybe approved it for use. Which edition was it that you were looking at? Is this a paid level? So staff could say, oh, mine doesn't say pro, or whatever it might be, right? Oh, I must be in the free one. OK?
I need to get into our organization's account or something. So, the more clarity you can provide folks, because right now, of course, they could just do an internet search and be like, oh, there's that product name, I'm gonna go start using it, it's on the approved list. Again, there may be new terms, new product names that we're not used to saying, and so folks aren't as accustomed to looking at, oh, this is a different version of ChatGPT than that one was, you know. So just putting that out there for folks to keep in mind that these tools are really operating just like others that you are used to, and there's less documentation, of course. The questions we're getting from folks are, like the point I made at the beginning, we can't see anywhere in the documentation something that explains why this is happening, right? How could they document it when the answer is, we also don't know why that happens? And so when you are talking to staff, especially if you're saying, hey, these are approved tools and we have these licenses, or here's how to access them, training your staff on how to be the most human users of AI tools is, connecting to your human point, going to be really important. Because we don't want folks to feel that, because they don't necessarily understand the mechanics of how it works, they're just going to trust it without questioning the content. For a lot of organizations who have built internal tools, just as an example, it takes dozens of tries just to get the model right. So these other tools, of course, they're not gonna be perfect. Perfect isn't real, and perfect is absolutely not real with technology. So train staff: how do I have some skepticism? How do I question what I'm seeing? How do I say, even if it was internally built, this data doesn't look right?
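The version-pinning advice here can be made concrete. Below is a hypothetical sketch, not an NTEN template: the tool name, fields, dates, and tiers are all illustrative placeholders for what an organization's approved-AI-tools register might record, so staff can tell whether the product in front of them is the exact one that was reviewed.

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str              # product name as staff will see it
    version: str           # the specific version/edition that was reviewed
    tier: str              # e.g. "free" vs. "paid"; the policy may differ by tier
    reviewed_on: str       # date of the organization's review
    approved_uses: tuple   # the uses the organization signed off on

# Hypothetical entry; your tools, review dates, and approved uses will differ.
REGISTER = [
    ApprovedTool("ExampleChat", "4.0", "paid", "2024-09-15",
                 ("drafting internal notes", "summarizing public documents")),
]

def is_approved(name: str, version: str) -> bool:
    """True only if this exact product *and* version passed review."""
    return any(t.name == name and t.version == version for t in REGISTER)

print(is_approved("ExampleChat", "4.0"))  # True
print(is_approved("ExampleChat", "5.0"))  # False: new version, needs re-review
```

The point of keying approval on the version, not just the name, is exactly the scenario described above: when the vendor ships a new version, the lookup fails and the tool goes back through review instead of silently staying "approved."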
That doesn't match my experience of running this program, so that we don't let it slip, where, oh gosh, it was working that way for a long time. That's also, I think, a space where we as humans can be our most human, you know, have some value-add as humans. But again, staff need to be trained that they are meant to question these tools. Because, you know, I don't know a lot of organizations that were like, question the database. No, they're like, put everything in the database, right? And now we need to say, no, question that report. Does that match your experience? You know, that was a long ramble. Oh, absolutely valuable. The human, yeah, the human contribution. And of course, my concerns are even at the outset, you know, the early stage, the ceding, or surrendering, of the creative process. And now let's chat a little about the conversations. Yeah. I listened to the example that you mentioned earlier that George posted. It was for podcasting. It was a podcast conversation around this, and he gave them some Whole Whale content, and the two were going back and forth and having a conversation. Yeah. Yeah, I listened to it, and one thing I was curious about, if you caught it as the pod father yourself: you know, I've had opportunities to see a number of different generative AI tools, and things closer to the front edge of what things can do, that are specifically, like, taking just a few seconds of you and then creating you. So hearing, just like, these could be any voices, these could be any people, is like, yeah, OK, this is what AI can do. It's spooky.
But when you listen to it, you can hear either you have a very bad producer and editor, or this is AI, because there are certain phrases that got reused multiple times. Not just literally the audio clip of this whole sentence, you know, the intonation, the whole sentence clip, was reused multiple times. One of them, I think, was along the lines of, that's a really interesting point. Yeah. Yeah. And there was one that was, like, describing the product, so it must have come from the page, or whatever source content was provided. But, you know, I think that some of that is there, and we as individuals, we as a society, will decide if we give it value or not, if it's worth it to people to make podcasts through AI, because we give it attention or we don't. Like, I just think naturally that will be there. Can I just go on record at this stage and say that that idea disgusts me? Oh, totally. But I do. And I realized that's what George's post was about. That it's now well within the possible to create an hour-long podcast of an artificial conversation based on an essay that somebody wrote sometime. Oh, totally. Totally. I don't... But I'm saying, yeah, I agree with you, but I'm saying that toothpaste doesn't go back in the tube. Like, we can't turn it off. Generative AI tools can already make that. So we as individual consumers of content, and as a society, need to either say we're gonna allow that and value it, or we're not, right? And not provide incentive for organizations or companies to make that and distribute it. But I also think that the place in that kind of video, audio, multimedia content, that AI tools have capacity, and will continue having more capacity, to build, is much more important than you.
And I talked about this a number of months ago around mis- and disinformation. It's one thing to say, made-up voices making some podcast about content, like, that's garbage, right? But we can't just throw away the idea that that's technically possible, because organizations need to know AI tools are already capable of creating a video of your CEO firing your staff. You need to be prepared to say, that was a spoof, and this is how we're gonna deal with this, right? Because while it's maybe further separated from our work, the idea that content could just be created that way, you and I can say we don't value that, whatever, but these tools are capable of spoofing us as people, as leaders, as organizations. You know, what would it look like if there was a video from your program director saying that everybody in the community gets a grant, and you're a foundation, right? These are real issues, and I don't want folks to confuse how easy it may feel for us to have an opinion that some of this AI-generated content isn't of value with the idea that there isn't something there to have to come up with a strategy and plan for. Because we can say that's garbage, but those same tools that made the garbage could make your spoof. You know, also labeling. Yeah, I don't trust every AI-generated podcast team, and I'm not gonna call them hosts, because there is no host, to label the content. I don't trust that that's gonna happen: this was artificially generated, it's not a real conversation. Yeah. Hello, for everyone that's listening: human Amy is here, talking with human Tony, who I can see on the screen with me. Boycott your local AI podcast. I don't know. There's not a solution. You're right, we can't go back. I'm just voicing that we can say that it's not something we value. We can say that this is why we don't value it, right?
The art of conversation: listening, assimilating, responding, listening again, assimilating and responding. That is an art uniquely... Well, maybe it's not uniquely human. I don't know if deer have conversations, or what, and we know whales do. So I take that back. It's not uniquely human, but at our level, you know, we don't converse merely to survive, merely to warn each other of threats. I'm suspecting that in the animal, in the mammal kingdom... wait, are animals mammals, are mammals animals? No, and I think it's a Venn diagram. Oh, so they're separate. OK. So there's... kingdom, phylum, class, order, family, genus, species. I got that out of high school biology. I can say it in my sleep: kingdom, phylum, class, order, family, genus, species. All right. In the animal kingdom, my suspicion is that more of the communication is about the basics, like, there's a good food source, there's a threat, teaching the young, don't do that, things like that. Survival, more base. I doubt, you know, it's about the aesthetic of the forest that the deer are in. But even if the birds are talking about the way the sunlight comes through the leaves, they are still alive. And I think what you're trying to draw a distinction between is the value, and even beauty, of us having a conversation, and the value of what comes of that conversation in our own minds and our own learning. But in this case, it's recorded, so other folks could hear it, and listen to it or be impacted by it, versus it being a technical mechanism where we say, OK, here's a long paper, go make it sound like two people are discussing this, right? That doesn't fit the criteria of what we want or need to value in the world we want, right? Yes, the art of conversation, I think, something you look forward to, not something that you do out of necessity, right?
And, you know, there's, I think, a place where, especially at NTEN, conversations around AI have come back to opportunities that AI tools may present for different ways of learning, different ways of accessing information. But again, those aren't necessarily, come up with two podcast-host voices and then have them have a conversation about this research report. A lot of those tools could be made better but already exist, you know: different forms of screen readers, apps that can help someone maybe navigate the internet, or summarize documents to help them because they don't want to read a 50-page document, or they can't read it visually on the screen. So I think there's space there. But again, it's because you're trying to preserve what is most human, and that is that user, who maybe needs accommodations of something that technology can provide. It's not, OK, let's use technology to co-create something separately over here and just hope people consume it, right? And to be clear, George didn't post that thinking, oh great, now everyone will want to consume this. No, it was a demonstration. But again, I'm just using that as a place to say, yes, there are even conversations to say, great, what accessibility could this create for our users, right? But what's most human is those users and their actual needs, and not, you know, look what AI could do, let's just make different types of content, right?
The last one I wanna raise is the one that caused me to use the word dystopian as I was commenting on George's post, which was the AI self-reflection: having AI justify itself to the users that it is trying to attract, and then relying on that as an insightful analysis, as a thoughtful reflection, as contemplative of its own work that it's doing. Those, I think, are uniquely human actions: introspection, contemplation, pondering. How did I do? How did I perform? How can I do better? These might be uniquely human. I would argue there are a number of humans I can see that don't... Well, but I didn't say... That's a different population. Now you're taking the whole human population; I'm talking about the contemplative ones. Yes. And there are humans who are not at all introspective, and questioning whether they could have done better, and learning from their contemplations. But I think those are all uniquely human activities. And we're now asking AI to purportedly duplicate those processes and analyze and contemplate its own work. Yeah. And as you said earlier, I certainly don't trust it to be genuine and truthful, if AI is capable of truth, we'll put that existential question aside, in its analysis of its own work. Because, as you pointed out, the tools are built to be used by humans, and the tools are not going to condemn, or even just criticize, their own work. Yeah, but we're... Yeah. And I think part of the challenge... I'm sorry, but there are humans who are deceiving themselves into thinking that the analysis and contemplation is accurate and genuine. Yeah.
And I think part of the challenge I was gonna name is just that we as users, I'm not saying you and I individually, but we as the human users of these tools, are also setting ourselves up to be, you know, dishonest in our use, because we are bringing inappropriate or misaligned expectations to the product. We cannot expect a tool that's designed to appease us, and to lie at the cost of giving an answer, to thoughtfully and honestly reflect, or to say, no, there is no answer, right? What we have are tools designed to appeal to us. Right. Right. And when we are talking about that, to be clear, we're really talking about generative AI tools, tools that are designed to generate some new content, new sentences, new answers, whatever we're asking of it, back to us. AI itself is just such a massively blanket term that I don't want folks to think nothing that could be considered an AI tool could be trusted to generate an answer, because we're specifically talking about generative AI there. But, you know, say you had a machine learning model that was looking at 30 years of your program participant data. Well, that's probably already a tool that isn't set up to generate content. It's not coming up with new participant data. It's looking at the patterns, it's flagging when a pattern meets the criteria you've presented to it, it's maybe matching that data to something else. But again, you've said, here are the things you could match it to, et cetera. So this is not to say Tony and Amy say never trust technology. They say, bring expectations that are aligned to the tool, differently to every tool that you're coming to. That hypothetical tool that you just described is being asked to evaluate your data, not its own data. It's to evaluate your performance, your data set.
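That distinction, a tool that only checks your data against criteria you supplied rather than generating anything new, can be sketched minimally. This is an illustrative toy, not any real analytics product; the record fields and the threshold are hypothetical.

```python
# Toy version of the non-generative pattern-flagging idea from the conversation:
# the tool never invents records, it only tests the data it was given against
# a criterion the organization defined. Field names are hypothetical.
records = [
    {"participant": "A", "visits": 2},
    {"participant": "B", "visits": 14},
    {"participant": "C", "visits": 9},
]

def flag(data, criterion):
    """Return only the records that meet a caller-supplied criterion."""
    return [r for r in data if criterion(r)]

# The criterion comes from the organization, not from the tool.
high_engagement = flag(records, lambda r: r["visits"] >= 10)
print([r["participant"] for r in high_engagement])  # ['B']
```

Nothing in `flag` can fabricate a participant, which is why expectations for a tool like this differ from expectations for a generative model.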
It's not being asked to comment on and criticize, or compliment, its own data. That's the critical difference. Yeah. Nobody here is saying... Well, I'm not saying that you're saying we are, but you're wise to remind listeners we're not condemning all uses of generative AI, large language models, but just to be thoughtful about them and understand what the costs are. And there are costs on the organizational level, and there are costs on the individual human level. And you comment on the organizational level because you think more at that level. But on the human level, another layer to my concern is that the cost is quite incremental. Mm. It's our creativity, our art of conversation, our synapses firing. It's just happening slowly. With each usage, we become less thoughtful composers, less critical thinkers, and it's just so incremental that the change isn't noticed until, in my critical mind, it's too late, and we look back and wonder, how come I can't write a letter to my dad anymore? Why am I having such a hard time writing a love letter to my wife, husband, partner? Yeah. I know someone who is a new grandmother, and she has a little kit where you write letters to your grandchild, and they open them when they're, whatever, 15 or 20 years old, like a time capsule. Why am I having trouble composing a short note to my grandchild? Right. Well, and I think, you know, just honestly, as a person in this conversation, not speaking from an organizational strategy perspective, I think as a person that is your friend, Tony, you know, I would say, I don't personally have a problem, I can write a letter, and I think I'm a strong communicator. It's hard to make me stop talking. So, you know, I could write a letter.
I know, for other folks, even without AI, they might say, well, what goes in the letter? Like, what do I put in there? What are the main things I should cover? And I actually have less of a strong reaction to the idea that somebody would use generative AI to come up with, OK, what are the three things I should cover in my letter, and then I'll go write the sentences. And more that what I hear underneath what you're saying is actually the same important value I have, and wrote about in the book with Afua, which is: in the world I want, in this beautiful, equitable world where everyone has their needs met in the ways that best meet their own needs, technology is there in service to our lives, and not that we are bending our lives in order to make technology work, right? And maybe in that beautiful, equitable world, there are people who have a technology, is it an app, is it called AI anymore, you know, whatever it is, that says, hey Tony, don't forget, today is the day you write the letter to your dad. It's Friday, you always write it on a Friday, or whatever, right? And make sure that you do it, because, you know, it makes you feel happy to write that letter. Maybe that's true. Maybe in that world there are some people who have a tool that helps them remember to do that. But what's important to me, and what I think I hear is important to you, is that technology is there because we need it and want it, and that it is working in the ways we need and want it to work, and not that our lives are influenced and shaped in order to adjust to the technology. Yes, I am saying that. I am concerned that our changing is beyond our recognition. We don't see ourselves becoming less creative, and I'm not even only concerned about myself. I can write a letter, and I think at 90 I'll still be able to write a letter.
But there are folks who are infants now, those yet to be born, for whom artificial intelligence is going to be so much more robust, so much more pervasive, in ways that we can't today imagine, I don't think. Yeah. And what are those humans gonna look like? I don't know, maybe they'll be better humans. Maybe they will. I'm open to that. But I like the kind of humans that we are, you know. So, but I'm open to the possibility that there'll be better humans. But what will their human interactions be? Will they have thoughtful conversations? Will they have human moments together that are not artificially outlined first, and maybe even worse, constructed for them? I don't know. But some of my concern is about those of us who are currently living and have been born, and across the generations, less for older folks, because their interactions with artificial intelligence are fewer. If you're no longer working, your interactions with artificial intelligence may be nonexistent. And I think it's natural, as you're older, you're less likely to be engaging with the tools than if you're in your twenties, thirties, forties, or fifties. Well, my very human reflection on today's conversation is that it is usually the case that we start talking about any type of technology topic, and you constantly interject that I need to be practical, I need to give recommendations, I need to explain how to do things. And I appreciate and welcome you joining me over here in theoretical land, about the impact of technology broadly, across our work, across our missions, across our communities, across our future. Welcome, welcome to my land, Tony.
I have appreciated this one-time opportunity to let go of the practical, tactical advice, and I hope listeners, you know, had some thoughts, had some reactions. Truly, email me any time. But I hope that, if nothing else, it was an opportunity for folks to witness, or kind of listen in on, and maybe you were talking to yourself in your own head, a conversation about what these technologies can be, what we need to think about with them. Because in any technology conversation, I think it's most important to talk about people. That's the only reason we're using these tools, right? People made them, people are trying to do good work with them. So talking about people is always most important, and I hope folks take that away from this whole long hour of AI. Thank you for a thoughtful human conversation. Yes, Amy Sample Ward. They're our technology contributor and the CEO of NTEN. And folks can email me, tony@tonymartignetti.com, with your human reactions to our human conversation. Thanks so much, Tony. It was so fun. My pleasure as well. Thank you. Next week, a tale from the archive. If you missed any part of this week's show, I beseech you, find it at tonymartignetti.com. We're sponsored by Donorbox. Outdated donation forms blocking your supporters' generosity? Donorbox: flexible and friendly fundraising forms for your nonprofit. donorbox.org. Our creative producer is Claire Meyerhoff. I'm your associate producer, Kate Martignetti. The show's social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. You're with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.

Nonprofit Radio for June 10, 2024: Future-Proof Your Nonprofit With Apps, Tools & Tactics

 

Jason Shim & Meico Marquette Whitlock: Future-Proof Your Nonprofit With Apps, Tools & Tactics

Jason Shim and Meico Marquette Whitlock return from the 2024 Nonprofit Technology Conference, with their annual collection of tech to help you manage your tech. From collaboration to inbox management, from transcription to hidden Zoom tools, this panel will help you find greater balance and efficiency. Jason is at the Canadian Centre for Nonprofit Digital Resilience and Meico is The Mindful Techie.


Listen to the podcast

Get Nonprofit Radio insider alerts!

I love our sponsors!

Virtuous: Virtuous gives you the nonprofit CRM, fundraising, volunteer, and marketing tools you need to create more responsive donor experiences and grow giving.

 

Donorbox: Powerful fundraising features made refreshingly easy.

Apple Podcast button


We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners

Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.

Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.
View Full Transcript

Hello and welcome to Tony Martignetti Nonprofit Radio. Big nonprofit ideas for the other 95%. I'm your aptly named host and the pod father of your favorite abdominal podcast. Oh, I'm glad you're with us. I'd suffer with jejunoileitis if I had to digest the idea that you missed this week's show. But first, we have a listener of the week, Sharry Smith, or it could be Chary. It's either Cherry or Sherry. I'm gonna say Sherry, but it could be Cherry. Sharri Smith from Portland, Oregon. Sharri gave us a shout-out on LinkedIn when she was listing her favorite podcasts for nonprofits. Shari, Shari, Shari, Shari. Thank you. Thank you very much for doing that. You had a couple of other podcasts listed. Um, yeah, they're good, you know, I know them. Uh, hm, OK, they're good. But Nonprofit Radio is on the list. That's the one, that's the one you want. So Sharry, Cherry, thank you very much, listener of the week this week. Thank you so much, Sherry. Here's our associate producer, Kate, with what's going on this week. Hey, Tony. Congratulations, Sherry. This week we have Future-Proof Your Nonprofit With Apps, Tools and Tactics. Jason Shim and Meico Marquette Whitlock return from the 2024 Nonprofit Technology Conference with their annual collection of tech to help you manage your tech. From collaboration to inbox management, from transcription to hidden Zoom tools, this panel will help you find greater balance and efficiency. Jason is at the Canadian Centre for Nonprofit Digital Resilience, and Meico is The Mindful Techie. On Tony's Take Two: chatty gym guy. We're sponsored by Virtuous. Virtuous gives you the nonprofit CRM, fundraising, volunteer and marketing tools.
You need to create more responsive donor experiences and grow giving. virtuous.org. And by Donorbox. Outdated donation forms blocking your supporters' generosity? Donorbox: fast, flexible and friendly fundraising forms for your nonprofit. donorbox.org. Here is Future-Proof Your Nonprofit With Apps, Tools and Tactics. It's a pleasure to welcome back Jason Shim and Meico Marquette Whitlock to Nonprofit Radio. Jason is Chief Digital Officer at the Canadian Centre for Nonprofit Digital Resilience. How can we harness technology to make a difference in the world? That's the question Jason loves to explore with organizations. He's on LinkedIn, and the centre is at ccndr.ca. Meico Marquette Whitlock is The Mindful Techie. He's a workplace well-being strategist who helps mission-driven professionals prioritize their well-being so they can elevate their well-doing. He's also on LinkedIn, and his practice is at mindfultechie.com. Jason and Meico, welcome back to Nonprofit Radio. Thanks for having us. Thanks for having us. This is a tradition. We didn't get to talk at the Nonprofit Technology Conference proper, but we're filling in now with your annual sort of review of apps, tools and tactics. And it's actually quite appropriate, I believe, because we're recording on May 1st. It's May Day in much of the world, not celebrated so much in the US and Canada, celebrating labor organizing and labor rights. However, in the US and Canada, nonprofit workers certainly have a right to the apps, tools and tactics that will make their work more balanced and productive. So I'm sure you both agree with that, right? We can come to terms on that. So why don't we proceed. Let's go alphabetically by first name. So Jason, you go first. You get to introduce the first apps, tools and tactic. Yeah. So thank you, Tony.
So before we jump into the tools, I think one thing that we definitely keep in mind when talking about tools is the concept of tiny gains. Having the perspective that 1 to the power of 365 is 1, but 1.01 to the power of 365 is about 37.8. And so that notion of improving bit by bit, 1% each day, is kind of how we look at these tools. One tool isn't necessarily going to solve all the things, but we're going to be sharing various tools that can help kind of nudge things 1% at a time. So the incremental growth is valuable, of course. Yeah, totally. So the first tool that we want to highlight for folks is a tool that is actually built into Microsoft Word already, but folks may not necessarily be aware of it in as much detail. And that's that there's actually a built-in transcription tool right in Microsoft Word. You'll find a little microphone icon, and you can dictate directly into Word, but you can also upload files into it. And that's a feature that's not as well known: you can upload up to five hours per month, per user, into the software for automated transcription that is bundled in with your Word online. And where is this microphone found? Maybe I've seen it a thousand times, I just haven't noticed it. Where do we find the microphone? So typically it'll be in the menu bar in the top right-hand corner of Microsoft Word online. So if you're accessing it in the browser, it should be in the top right-hand section. OK. We had a session, a panel, at NTC about tools that you already have that you may very well not be using.
And they covered all of the Office... Office 365 has lots of value in there that people don't know about. And also Google, a lot of free Google tools. Yeah, they focused on Microsoft 365 and Google. Interesting, you know, they had like a dozen things that you're already paying for, or getting for free, and you just don't know that they're there. They're hidden. So, OK, so consistent with that is the transcription tool. OK. So up to five hours, you said, up to five hours a month. Yes. And it's super handy if you're doing things like interviews, or if you're doing, yeah, meeting notes, those types of things. Some other tools already have the transcription part built in, but in the specific use case where you may be doing, say, a program interview or something like that, this is an additional functionality that's in there. So you're uploading the audio file. Yes. All right. Interesting. I know a guy who does a podcast. I believe he already has a transcription process in place, but I have to re-evaluate, because maybe this one is simpler. And I don't know if he's paying for that transcription process. I can't recall what he's been doing for the past several years around transcription, but I'll have him look into it. Meico, welcome back. Good to see you. Good to see you. Thanks for having me. My pleasure. Congratulations also on your new book, which you and I will be talking about in a couple of weeks. We'll get you back on exclusively for How to Thrive When Work Doesn't Love You Back. Thank you. I appreciate it. Yes, I know you and I will be talking in a couple of weeks. Um, what's next? What's next on our hit parade of apps, tools and tactics?
Well, if you want to stick with the theme of things people aren't using, I would add Zoom to this category, and I want to walk through a few things I think are pretty interesting in terms of developments with Zoom. Similar to Microsoft Office and Google Workspace, Zoom is one of those platforms many organizations are using; many folks have at least a basic paid subscription at the organizational level. These tools are constantly evolving, and because we're using them all the time, sometimes we're not aware that certain features have been added. So I'll go through these really quickly. First: for lots of organizations, it's important that when people identify themselves, they can identify their pronouns. One way people have been doing that is modifying their display name to add their pronouns. But for folks who didn't know, Zoom now has a dedicated pronoun field, so you don't have to do that. If your administrator has enabled it in the web-based settings for Zoom, you can set your pronouns by default on your own profile, and they'll appear automatically; you don't have to update your name when you pop into Zoom. I think that's one cool feature, where that's appropriate and makes sense for folks. Another option is about accessibility, and about recognizing that people learn and process differently. You and I are talking and able to see each other right now as we're recording this interview, but some folks process differently and actually need to see the captioning, or some form of transcript, to be able to follow along and process information.
For that, there is automated captioning built into Zoom now. It's not 100 percent perfect, but it's there; it's essentially computer-generated captioning. A lot of languages are covered by default in your basic paid subscription, and similar to the pronoun field, your administrator, or whoever manages your organization's Zoom account, can log into the web-based portal and enable this feature if it isn't already enabled. What this allows is that if someone is in a meeting and needs access to closed captioning, that individual can enable it for themselves. Folks who don't need it can leave it turned off, or if it's on by default, turn it off; it's completely up to you as the user. I'll share one final thing and then toss it back to you, Tony. Many of us have probably seen the news where they're doing the weather report: a person standing in front of a screen, pointing here and pointing there. Generally what's happening is that person is standing in front of a green screen, and their technical team is projecting the images, so it looks like the person is actually pointing to a map of wherever you are. Well, you can simulate that when you're presenting. Many of us are used to traditional screen share, where you share your slides or your screen and you appear in a separate box, but you can also share your screen in such a way that you are overlaid on top of your slides, so you have that weatherperson effect as well. Now, the caveat is that you have to be aware of how this shows up, including your lighting.
In some cases you might actually need a green screen for this to be effective, and it's going to require you to format your slides or presentation a bit differently, because you're now taking up a slice of the screen in addition to the content on the slides. But it's a really interesting way to create a different type of experience for folks you're meeting with or doing a webinar with. I think you can have some fun with that, like at the beginning of a webinar: you're immersed in your slides, surrounded by your valuable content. But can you control where you are, or how much of the frame you get in proportion to your slides behind you? Like, I'll be in the lower left, so you format your slides so the lower left is always blank. You would have to test this out and try it. There are some limited features that allow you to do this directly in Zoom. The other approach is physically positioning yourself in reference to the camera: think about where you're going to be, and test it out for yourself. The final thing I'll say is there are third-party tools that work with Zoom that do this better, but I wanted to point out that Zoom does have this built in, at a basic level, for folks who want to try it out and have some fun with it. OK, so pronouns can be automatically populated, you don't have to go change your screen name; there's automated captioning; and what is this background feature called? It's called Set PowerPoint as Virtual Background. OK, it's aptly named. All right.
Are you familiar with, or recommending, any Zoom alternatives? I know there's Teams, of course, although 95 percent of the meetings I'm in are on Zoom. Are either of you finding alternatives to Zoom for any reason, or something other folks are using that's valuable? What comes to mind, although this isn't necessarily an alternative to Zoom for the video conferencing itself, is a tool called OBS. If folks are looking for advanced video streaming capabilities, OBS sits as an additional layer on your video feed, so you can further customize some of the green-screen effects, or move yourself into a corner like Meico described. It gives you a ton of functionality. It sits between your video feed and something like Zoom, Teams, or Google Meet, and allows you to do all sorts of things; you could directly do picture-in-picture, for example, if you want an advanced video broadcast. OBS is used quite a bit by streamers, but it's definitely a tool to explore if folks are looking for more advanced functionality. It's called OBS, and it's free. Oh, OK, cool. And live streaming? Yes, though not just for live streaming. Even on a Zoom call that isn't being live streamed, you can activate it and it will help you control some of the outputs of your video feed. OBS stands for Open Broadcaster Software. So, to build on what Jason was sharing.
Going back to the example of the weatherperson: if you wanted to create a highly produced, polished presence that doesn't look like a typical Zoom or Teams screen or presentation, you could add lower thirds; there are just so many different cool things you can do using what Jason was describing. And to pick up on your original question about alternatives: to my awareness, Zoom, Google Meet, and Teams are probably the top three, and there's good reason for that. There's broad compatibility across platforms and devices, and there's some built-in trust that organizations have in those big three. That said, I think it's also a good point that when we think about the expansion of tools, we don't have to use the latest and greatest. Everything doesn't have to be high tech, right? There's room for mid tech and low tech, and when it comes to meetings and collaboration, sometimes we forget what happened when we didn't have these tools. We picked up the phone, we met in person, we had conference lines where people called in and couldn't see each other. When I first started working in the tech space, I worked with colleagues and managed teams of people I never actually met in person. We would have phone calls and conference-line conversations and meetings, and that was the primary way we communicated and collaborated. And we're seeing, in terms of digital wellness, that there's actually a benefit to that, because we're spending too much time in front of our screens. There's research that shows it increases cortisol levels and stress levels.
And over time, too many back-to-back, video-mediated meetings actually reduce engagement and productivity, and in some cases can be counterproductive. So we want to find that balance, and recognize that depending on your intention and the outcome you're trying to reach, sometimes an old-fashioned phone call, or working asynchronously, can be just as effective, or sometimes more effective, at getting you where you're trying to go. Thank you for that; it's a valuable reminder. Unlike an in-person meeting with several people, on video you can feel like you're being stared at. You're not; people are looking at their screens, not necessarily at you, but it looks to you like everybody's looking at you. I can see why cortisol levels rise after a few hours of this. Absolutely, and what you just described is a very real phenomenon. Part of it is the self view: seeing yourself on screen and being self-conscious about it. You can turn this off in Zoom; this is another feature. If you click the three dots on your own image, there should be an option to turn off your self view, if that's an issue. Others are still seeing you; you're not stopping your video, just your own self view. OK. One final consideration here: there's research from Stanford University showing that this particular phenomenon is compounded for women, because we have different expectations about how women are presentable and show up on screen.
Oftentimes there's an unassessed cost that we just take for granted: in some cases, women have to do more work in order to be what we deem presentable. So if you're requiring folks to be on camera all the time, that is sometimes one of the side effects, if you're not aware of it. Excellent. Valuable reminders. Thank you. It's time for a break. Virtuous is a software company committed to helping nonprofits grow generosity. Virtuous believes that generosity has the power to create profound change in the world and in the heart of the giver. It's their mission to move the needle on global generosity by helping nonprofits better connect with and inspire their givers. Responsive fundraising puts the donor at the center of fundraising and grows giving through personalized donor journeys that respond to the needs of each individual. Virtuous is the only responsive nonprofit CRM designed to help you build deeper relationships with every donor at scale. Virtuous gives you the nonprofit CRM, fundraising, volunteer, marketing, and automation tools you need to create responsive experiences that build trust and grow impact. Virtuous.org. Now back to Future Proof Your Nonprofit With Apps, Tools And Tactics. Jason, you have something else for us. I know you do; that's a rhetorical question. When I'm talking to Meico and Jason, we can go on for hours. It's a purely rhetorical question. Absolutely. So the next one, along the lines of the transcription we talked about earlier, is text-to-speech tools. They've developed quite a bit in the last few years, and there are a few out there that I'll rhyme off: Natural Reader, ElevenLabs, and Narakeet. In particular, I've used Narakeet before, and it's super helpful when you need to record voice greetings.
If anyone's ever had to record a voice greeting, say for a voicemail, and needed twenty-some-odd takes to get it right, something like Narakeet can help streamline that. The special thing is that they have a lot of voice models available in different languages, so it really nails accents from around the world. Given that I live in Canada, when recording greetings for an organization, they need to be delivered in English and Canadian French, and they have specifically Canadian French models, along with many other language models for text to speech. We're able to provide a script in both English and French, it reads it off, and you get it in one take rather than ten or twenty, or trying to coordinate multiple people around the recording. That's an example of a tool that can help streamline things, and there are many other potential use cases. That one is called Narakeet? Yes, like parakeet with an N. And also Natural Reader and ElevenLabs. Sorry, what was the last one? ElevenLabs. Yeah. Similar to text to speech, for editing audio files and text, there are a few different tools that do this, but the one I'll name specifically is Descript. Descript is a tool that can help with things like filler-word removal. If you have an audio file you've uploaded to Descript, it produces a transcript for you, you can edit it like a Word document, and it'll detect things like when you say "um" or "ah" and automatically help you remove them.
So let's say you added a few extra words while speaking: you can edit the text in Descript and it will automatically remove them from the audio file, without your having to splice the audio yourself by looking at the waveforms. You edit the transcript and it gives you a clean audio file afterwards. That's incredible, that you can edit it as text and it applies those edits to the audio file. Yeah, it's good. To me that's amazing. Descript: the letter D, then E, script? Yes. And there's additional functionality in Descript called Overdub. Let's say you're recording something and you left out a word: Overdub can actually fill in the word for you, if you've trained a voice model to do it. So if I intended to say an extra word or a phrase, I can enter the word I meant to say and it fills it in so it sounds seamless. For something like a podcast, or whatever else folks may be looking to accomplish, it really helps streamline the audio editing process. Rather than re-recording an entire section, you can just type in the word and it'll drop it in for you, in your voice. So it learns from the rest of the file how to pronounce the word or words you've inserted into the text? You may have to do some training on it, but yes, it uses AI voice cloning to replace some of the audio. Damn. And I've used this for my podcast.
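Descript's actual pipeline is proprietary, but the filler-removal idea described here can be sketched with a word-level transcript that carries timestamps: delete the filler words from the text, and the matching time spans tell you what to cut from the audio. A minimal sketch, where the word list, the timestamps, and the FILLERS set are all illustrative assumptions:

```python
# Sketch of transcript-driven audio editing: each transcribed word carries
# start/end timestamps, so deleting words from the text yields time spans
# that an audio editor could then cut from the waveform. Illustrative only.
FILLERS = {"um", "uh", "ah", "like"}  # hypothetical filler list

def plan_cuts(words):
    """words: list of (text, start_sec, end_sec). Return (kept_text, cut_spans)."""
    kept, cuts = [], []
    for text, start, end in words:
        if text.lower().strip(",.") in FILLERS:
            cuts.append((start, end))   # remove this span from the audio
        else:
            kept.append(text)           # keep the word in the transcript
    return " ".join(kept), cuts

transcript = [("So", 0.0, 0.2), ("um", 0.2, 0.5), ("the", 0.5, 0.6),
              ("tool", 0.6, 0.9), ("uh", 0.9, 1.3), ("works", 1.3, 1.7)]
text, cuts = plan_cuts(transcript)
print(text)  # So the tool works
print(cuts)  # [(0.2, 0.5), (0.9, 1.3)]
```

This is why editing the text "just works" on the audio: the transcript and the waveform share one timeline.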
My podcast producer uses this, and it's saved us so much time. As a matter of fact, Jason was one of the folks we interviewed for the first season, and we used this software to clean up our episode with Jason. That's incredible, because I do some of that work in my own post-production, but I'm using Audacity, and like you said, Jason, I'm looking at the waveforms. It's eminently doable; you just have to make sure you have the right spot and play it a few times to confirm exactly what you want to take out. But it's not nearly as swift as text editing, and you can't add anything unless you go record again, and then the ambient noise is never going to sound the same as it did on the day you recorded. Even sitting here in my same studio office, the ambient sound is different. Wow. I'll add one more thing about how this works. Jason is right about being able to edit the transcript. Using that example, let's say I thought Jason gave a long-winded answer, or Jason said something and then thought, oh, actually, I don't want to share that with my employer, can you take this out? We can go in and edit the transcript that way. And for the fillers, you don't have to go in and remove them manually: you can just tell it which fillers you want to remove, and it does it automatically for you. Depending on which subscription level you have, you can remove not just fillers but specific keywords that are significant to you and your audience, or to how you want to edit; you can train the software to remove those specific things automatically. Oh, that's very robust. Remarkable. Descript. OK. Cool, Meico, what's next? All right, I want to talk about collaboration tools and training tools.
I do a lot of training for organizations, much of it virtual, and Google Jamboard is a digital whiteboard that I used, and still use. But unfortunately, Google is winding down that particular offering; they're getting rid of it as of this fall. So I want to talk about a few alternatives, for folks who have been using Google Jamboard and are looking for alternative whiteboard tools, or for folks who haven't been using whiteboards and want to come to the party and be part of all the fun. I'll give you three really quickly. The first is FigJam. This is a whiteboard tool by the company Figma. To place Figma in the space, think of Adobe, for example: a lot of designers use Figma's UX tools, and web developers use them to design products and websites. But they have a separate product, FigJam, which is specifically for whiteboarding. One of the ways I use whiteboarding tools is for exercises where maybe we're brainstorming together: the same way that, in a room in person, you give people stickies and they stick them on the wall or sort them into different buckets, you can do the same thing virtually. FigJam is one of those tools. There's also Miro, another tool in this bucket, and I know the NTEN team actually uses it a lot. When Jason and I served on the board at NTEN, that was a tool we used a couple of times as part of our collaboration and strategic planning process. And the final tool, going back to Zoom for a moment: Zoom has a whiteboard feature, for folks who weren't aware of that and aren't using it.
Zoom has a whiteboard feature, and Zoom also has an annotation feature where you can annotate things on the screen, both as the presenter and as a participant. In my presentations, I sometimes put questions on the screen, or a scale, and have participants use the annotation tool to indicate where they are on the scale. People can write on the screen, or you can use the Zoom whiteboard feature. So those are three alternatives to Google Jamboard: FigJam, Miro, and Zoom Whiteboard. And they all allow all the participants to contribute to the whiteboard? Yes, they all allow that, with the caveat that while all three have the basic whiteboard functionality, they each have some distinct reasons why you might want to use one over the other. Picking on Miro for an example: from my perspective, Miro has a very steep learning curve. As a trainer, I probably would not use Miro in a training where folks haven't been together before and haven't used the tool before. But if you're using it over the long term and you're able to train your team on some basics over time, then Miro is a great tool. FigJam and Zoom Whiteboard are a bit more intuitive, so depending on your audience, you want to take those things into consideration. If you're using the Zoom whiteboard, which I've never used, are you just collaborating with your mouse? Is that how you contribute to the whiteboard? Your mouse, or your stylus. I believe it works much like Google Jamboard, where you can drag and drop text boxes and type in them. So that's one option as well.
Jason, I'm not sure if you have familiarity with this or other thoughts about its use. Yeah, I believe folks drag with their mouse and can type in, and it has a lot of parallel features to Jamboard and Miro, although it's lighter, for sure, on that front. OK, cool. It's your turn, Jason. Yeah, so the next one that comes to mind is Minimis Launcher. It's an alternative launch screen for your phone. It launched initially on Android, and I believe the iPhone version is now out as well. What it does is help you get control over addictive apps. If folks are finding they're in a loop or cycle of constantly checking their phone, this is one of those apps that can help make a dent in breaking that cycle a little bit. It changes your initial phone screen and gives you primary access to the apps you need for work or basic functions. But if you do want to access something like social media, it will prompt you: are you sure you'd like to do this? You click yes, and it will actually prompt you again: are you absolutely sure? Then you go to another screen, and it says, OK, now give a rationale for why you'd like to check your social media. So it puts additional barriers up. And when you do move through it, it allocates you 15 minutes, to time-box it, so you're not stuck in that loop of looking down at the screen, looking back up, and realizing, oh my gosh, an hour has passed.
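The "intentional friction" flow described here, escalating confirmations, a required rationale, then a fixed time box, can be sketched in a few lines. The function name, the prompt structure, and the 15-minute limit below are illustrative assumptions mirroring the description, not the app's actual implementation:

```python
# Sketch of the intentional-friction pattern used by distraction-blocking
# launchers: block the app unless every confirmation passes and a written
# rationale is given, then grant only a fixed, time-boxed session.
def gated_open(app: str, confirmations: list, rationale: str, limit_min: int = 15) -> int:
    """Return the allowed session length in minutes, or 0 if access is blocked."""
    if not all(confirmations):   # both "are you sure?" prompts must be yes
        return 0
    if not rationale.strip():    # a rationale must actually be written
        return 0
    return limit_min             # time-boxed session, never open-ended

session = gated_open("social media", [True, True], "reply to event RSVPs")
print(session)  # 15
blocked = gated_open("social media", [True, False], "bored")
print(blocked)  # 0
```

Each step adds just enough friction to interrupt the habit loop without forbidding the app outright.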
So it's really a tool that can help regulate some of the instincts that may be triggered by those addictive algorithms that keep feeding content to keep us hooked to a phone and social media. This is like a mother looking over your shoulder, or your own conscience being awakened: are you absolutely sure? And then you get a time limit. Even when you're absolutely, positively, 100 percent sure, there's still a time limit. Absolutely. That's Minimis Launcher. Yes. OK, cool. These are really fascinating. So there are tools that can help us; we just need to be intentional and conscious about our desire to be more productive and less distracted. Meico, this is right in line with your practice; you're the Mindful Techie. Absolutely. I think the underlying thing here, both personally and professionally, is being clear about what your overall intention is and what outcome you're driving for. Being clear about those things is going to, number one, help you determine which tools are the best for you to use right now. And as I mentioned before, sometimes the latest high-tech tool isn't the best tool. There's the mid-tech and low-tech option. Exactly, going to the phone. Exactly, those are options as well. The other consideration is that not only are intention and clarity about outcome important for the reasons I just stated, but you also have to remember that, particularly for for-profit entities, a lot of companies don't necessarily have your best interest in mind. What I mean by that is that they have to generate time on screen. They have to sell ads, so their incentive is slightly different.
Yes, maybe they want to provide a useful product, but they want you to use that product in a certain way, or for a certain amount of time, so they can increase the revenue or profit they're making. When you're aware of that, it becomes easier to identify the ways you might want to recapture your time and your attention, using something like what Jason just shared with Minimis Launcher. It's time for a break. Donorbox: open up new cashless, in-person donation opportunities with Donorbox Live Kiosk, the smart way to accept cashless donations anywhere, anytime. Picture this: a cash-free, on-site giving solution that effortlessly collects donations from credit cards, debit cards, and digital wallets. No team member required. Plus, your donation data is automatically synced with your Donorbox account: no manual data entry or errors. Make giving a breeze and focus on what matters: your cause. Try Donorbox Live Kiosk and revolutionize the way you collect donations. Visit donorbox.org to learn more. It's time for Tony's Take Two. Thank you, Kate. In the gym. I like to go to the gym and do my work. I work out on the elliptical and then I go on the floor and do a bunch of planks. I have to get some upper body work in; I haven't done that yet. I like to just get the work done, taking my time, not rushing, but getting through the work. It's my time. There's a guy, and I've learned more about this guy's life over the past many months. His wife had a stent when they were on vacation in Florida, and the surgeon said it's a good thing you weren't in the Caribbean, because the medical care wouldn't have been as good and she'd have ended up with an infection. She had to have a stent; she was suffering shortness of breath in Florida. I know where it was: Miami. They were in Miami. It's a good thing
they weren't in the Caribbean, the surgeon says. Now the guy's having motor problems with his boat; his boat is leaking oil. So this guy with the boat, and the wife who needs a stent, and the boat that needs a repair to the oil line. He went to an air show last week. Cherry Point is a local Marine Corps air station, within a half hour or so. He went to the Cherry Point air show over Memorial Day; the Blue Angels were there. I heard all about the show, like the guy was narrating it, and he's not even talking to me, he's talking to somebody else. But it's a community gym. It's not that big; it's certainly adequate, but it's not a 10,000-square-foot gym. So you overhear people. I got the narration of the Blue Angels air show, personalized for us in the gym. So, chatty guy. Are you the chatty guy? Don't be the chatty person, guy or gal. I'm not watching who he's talking to, so I don't know if they're suffering, or maybe just being polite, condescending a little bit, but he gets his oratory out. Don't be the chatty person in the gym; do your work. It's nice to say hi, that's different. But you don't need to narrate the air show for everybody who didn't get to go over Memorial Day weekend, or the motor with the oil line and the testing, where you spray soapy water on the line to find the leak. I know more about motorboat mechanics now than I've known in my entire life, learned in the past couple of weeks.
I got a short course in outboard motor maintenance, mechanics, and troubleshooting. Chatty guy. Don't be the chatty person in the gym. Just do your work. That's Tony's Take Two. It's exhausting; just recounting it is exhausting. You meet some of the funniest people in your gym, first with the birthday guy and now with the motor guy. Yeah, Tim was the one with the birthday. This is a different guy; this is not Tim. I don't know the characters. Well, we've got buku, but loads more time. Let's return to Future Proof Your Nonprofit With Apps, Tools And Tactics with Jason Shim and Meico Whitlock. Meico, I'm going to ask you about one that you have in your email signature. We talked about this either last year or the year before, and I clicked on it as we've been scheduling together: Inbox When Ready. I'm imposing one on you; I know this is not on your list, but please reacquaint us with Inbox When Ready. OK. Inbox When Ready is one of my all-time favorite tools. I use Gmail, and for folks who are Gmail users, this is a free plug-in that you can essentially install into Gmail, and it only works on the desktop, so I want to make sure people understand that: you access it from a web browser on a computer. The idea is that your inbox is hidden, either after a certain amount of time or by default when you log in, so you aren't sucked into the rabbit hole of responding to emails when your intention might be something completely different. As an example, maybe I was looking for the Zoom link for today's meeting. As opposed to logging in and seeing, oh, I have 50 new emails:
What if I don’t see any emails, and I can just go to the search bar and type in “Tony zoom link,” and I get directed directly to that email? I’m more likely to follow through on that and not be distracted. One of the other interesting things, too, is that we know from behavior change that being able to see metrics that actually show us, for example, how long we’ve been engaged in a certain behavior, or how long we’ve actually been in our inbox for a day, sometimes that awareness, like, oh my God, I spent six hours with my inbox open, that would be quite alarming, I imagine, for a lot of people, right? And so this particular tool allows you to see how much time you’re actually spending in your inbox, in Gmail, on a web browser. And that can sometimes be a catalyst to help you shift behavior, if your desire is to actually spend less time in your inbox. Now, the beautiful thing is, since this tool has come out, both Gmail and Microsoft Outlook, which are, I would say, the two biggest email providers in terms of the organizational space, have introduced new tools, including the ability to snooze your inbox, to temporarily pause emails coming in for a certain period of time. And there’s a host of other ways in which you can manage your time in your inbox that you can use to supplement or actually replace Inbox When Ready. But I’ve been using Inbox When Ready for so long, and it works for me, so it’s one of my go-tos. The idea of pausing your inbox. I mean, it seems so simple, but I never thought of it until you said it. There’s just this simple functionality, like, maybe I just don’t want, I just don’t need to see the incoming messages.
There’s another simple thing that I think it was you guys who shared with me years ago, which I did, which is just turn off the notifications. I use Apple Mail, so for me it’s a little red circle that has the number of unread messages in it. Just turn that feature off. Just let the email sit there in the dock, for me; it’s a toolbar for others. And you don’t have to be prompted that you’ve got 15 unread messages. It’s anxiety-producing. Yes. And so one of the things that folks don’t realize, and I think this happens in both the Android and the Apple device ecosystems, is that when you’re downloading an app, sometimes we’re in such a hurry that we don’t read the pop-ups that let us know what’s happening. And so we just click yes, agree, yes, agree, yes, agree, because we want to get to the app. And generally what’s happening is that what you’re saying yes to and agreeing to, in addition to sharing all your data, is all the notifications, right? The alerts, the badges, and so on and so forth. And so the particular feature that you’re talking about is the badge notifications, where, it could be for email, it could be for Facebook or Instagram, it shows you not only the app, but a little red dot on the Apple device, for example: oh, you have 2,000 unread Facebook messages, or you have three new likes on Instagram. And for many people, that’s a source of background stress every time you pick up your phone. And so I recommend for folks that unless there is a compelling reason, like you’re some kind of first responder and you’re doing important work where you have to know the minute someone is sending you one of those things, because if you don’t, someone’s gonna die, and for most people, that’s not the case, right? So, turn that off.
Turn that shit off. Absolutely. And you’re absolutely right. When you download a new app, they ask all those questions. Also location: share your location. Why do you have to share your location? Yeah. Right. Google Maps needs my location. That’s really about it. My bank does not need my location. I can deposit the check without it knowing what my IP address is, or what my coordinates are. Yeah. Right. Mindful, mindful. You’re The Mindful Techie. That’s right. You’re aptly named as well. There’s a lot of aptly named things here. Right. When you get the app, you’re anxious to get to the thing. Take a breath and read. You don’t need all the notifications and alerts. Okay, let’s stick with you, since I imposed one on you. I took your chance. So you had one teed up. Go ahead. All right. So I’m gonna share one I think is a favorite for both me and Jason, so much so that I think we probably share this in virtually every presentation, because it’s just such a phenomenal tool. It’s called Toby, and it is a browser tab organization plugin that is, I think, pretty much cross-browser at this point. I know that you can get it in Chrome and Firefox, and probably a couple of the other browsers. But the idea here is, you know, we routinely have meetings where you have to open a gazillion tabs that are relevant to that particular meeting. And for many people, you may be doing that manually. So you have a meeting with Tony and you’re like, okay, I gotta pull up the podcast, I gotta pull up this, gotta pull up this. And so you’re trying to scurry around as the meeting is starting, opening up all these different tabs. What Toby allows you to do is essentially create collections of tabs, and you just press one button and it opens all those tabs automatically, right?
And one of the interesting things with Toby, in comparison to the built-in bookmarks in many browsers, is that you can share the collections with your team. So let’s say, Tony, you have a humongous podcast staff, and you have a certain set of tabs that you all have open during your planning meetings. You, or someone else on your team, could create a Toby collection, share that with everyone, and then everyone has access to the same collection. It’s sort of standardized, and people still have the ability to create their own collections or customize on their own. And so it’s one of those things, going back to what Jason was saying about 1% better: that ability to save those few moments, and to save that mental stress that we go through at the start of a meeting, adds up tremendously over time. This is another good point you made before, too: accumulated background stress. Even the anxiety of seeing the badge with the 2,000 unread Facebook messages or something, you know, I didn’t get to it, oh, I’m so far behind, and then it becomes pointless to do it. The number keeps increasing, but you’re so far behind you may as well let it go, but it’s causing more anxiety, more agita. All right. Toby. So that falls under, like, browser tab management, browser management. Okay. Toby. Jason. Yeah, so another tool that I want to chat about, this may be something that is available for folks that they may not necessarily be using, but I just want to draw it to folks’ awareness. For those that may already have an O365 subscription through their organization, Bing Copilot has a commercial data protection feature flipped on.
And so what that means is that when folks are using some AI tools, there may be a concern that the data is being used to train the model, or that you don’t necessarily know whether it’s going to end up popping up somewhere else down the road. The commercial data protection feature actually assures you that your data won’t be used for training the model, and that it’ll be contained when you’re using it. And so if you are using it, you just have to be logged in to your O365 account at bing.com/chat, and there’ll be a little green shield in the top right-hand corner that says protected. And so you’re able to use that and rest assured that the data you’re putting in there isn’t being used to train a language model, for the folks who are using the generative AI function. So what it is, is that it’s similar to things like ChatGPT or Google’s Gemini, and in this particular instance, the little green icon in the top right assures you that it stays contained to your organization. Okay, so it’s like firewalled off from the generative AI learning. Mhm. Yeah. And as a general tip for folks as well, for the other tools that they may be using around generative AI: make sure that you’re checking the settings and the fine print. If you are using one of the free options, there may be settings in there to turn off the use of your data for training data models, for folks that may like to be a little bit more secure with their privacy and what they may be putting out there.
There’s also a concern when folks give one of these tools their own data to learn from. Like, “I want you to write a letter to a donor in my tone,” and so you upload, in your prompts, some of your own letters. You need to be very aware of what you’re providing for the learning, because I don’t know that it’s keeping your data only to this conversation, this purpose that we’re going back and forth about, or whether it uses your data, which would otherwise be proprietary to you because it’s your letters, for some larger purpose. Absolutely. And that’s something to be aware of when flipping on some of the features in some of these programs, where you may be prompted to flip on an AI feature or something, but in the fine print, or even not-so-fine print, it may say something along the lines of, “this will submit your data to a third party,” just giving you a heads up. But there’s a lot of additional subtext there, where it’s like, okay, well, after it’s submitted to the third party, what happens? Is this going to be used to train the language model? Is this going to pop up somewhere potentially in the future, or is it going to stay contained and not be used to train the model? So those are questions that are worth asking as more and more AI features pop up along the way as well. What third parties are out there in the world? I think the broader point that I would make here, too, and this is with social media, this is with anything you post on a website: technology has evolved to a point where essentially there’s a forever memory, right?
Even if you take stuff down, it’s still there, still findable. It’s somewhere, right? And so I always, particularly with younger folks, say: don’t share anything, don’t text anything, don’t post anything on Facebook and Twitter and Instagram and Snapchat and TikTok that you would be embarrassed to see on the news, right? If you’re embarrassed to see it on the news, then don’t post it. Don’t share it. Very wise, sage advice from The Mindful Techie. Let’s do one more each. Meico. All right. So we know that the volume of information has continued to increase, and it’s virtually impossible for us, from a human perspective, to keep up with all of that. And so in this case, one of the ways that AI can be very powerful is by actually helping us summarize voluminous documents, videos, and so on. And so one of the tools that we shared during our session is a plugin for Chrome that’s called YouTube Summary with ChatGPT & Claude. ChatGPT and Claude are two different AI tools. And one of the interesting things about this YouTube Summary with ChatGPT & Claude tool is that it allows you to summarize YouTube videos, web articles, and PDF documents. And so I don’t know about you, but I don’t have time to watch your two-hour training on fill-in-the-blank topic. Maybe I just want the CliffsNotes, give me the bullet points, and let me dive deeper on the points that are most relevant to my particular question or my particular curiosity at that moment. With this particular tool, you just plug in a URL and it gives you a summary, to help you really focus your time. And maybe you determine that, hey, this video doesn’t have what I need, or maybe it does, and I want to look at the last third of it and actually dive a little bit deeper.
But it can be a powerful tool to save you lots of time, particularly with lengthy videos or lengthy documents. Say the name of the tool again. So this is a long one. It’s called YouTube Summary with ChatGPT & Claude. Yeah, I’m just reading the title that they gave it. Okay. Yes. Thank you. Jason. Yeah, the last one I’ll share is more of a mental model than a tool, a way to really think about things on a two-by-two grid. The model is also known as the Eisenhower matrix. So when thinking about the various tasks that one has to do, and, you know, we shared a whole bunch of tools that help automate and speed things up, it’s also important to look at the tasks themselves and really take a look at what’s urgent and what’s important. On the two-by-two grid, on one axis you would have important and not important, and on the other axis you would have urgent and not urgent. And ideally, you’re spending your time on the not-urgent and important things, because that’s where you can make a lot of your long-term impact. The things that pop up that are urgent and important are important for you and your organization, and that’s where you make short-term impact. And when you think about things that are not urgent and not important, you kind of have to ask, well, why are we doing them? And so we want to make sure that these tools aren’t automating things unnecessarily. Automating what’s not urgent and not important is just spending more time and resources to do things that aren’t important and urgent; those things just need to be eliminated.
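As an aside for readers, the Eisenhower matrix Jason describes can be sketched in a few lines of code. This is only an illustration; the task names and the action labels are invented here, not from the episode.

```python
# A minimal sketch of the Eisenhower matrix: each task is rated
# urgent/not-urgent and important/not-important, and the quadrant
# suggests what to do with it.

def quadrant(urgent: bool, important: bool) -> str:
    """Map an (urgent, important) pair to a suggested action."""
    if urgent and important:
        return "do now (short-term impact)"
    if not urgent and important:
        return "schedule (long-term impact)"
    if urgent and not important:
        return "delegate or automate"
    return "eliminate"

# Illustrative tasks, rated by hand.
tasks = [
    ("board report due today", True, True),
    ("strategic planning", False, True),
    ("routine inbox triage", True, False),
    ("junk mail", False, False),
]

for name, urgent, important in tasks:
    print(f"{name}: {quadrant(urgent, important)}")
```

The point of the model survives the code: only the urgent-and-not-important quadrant is a good automation target, and the not-urgent, not-important quadrant should be cut rather than sped up.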
So, thinking of those, the email inbox is a very good example of not urgent, not important. Most of it, obviously there are exceptions, but most of email is not urgent and not important. Mhm. Yeah. So things like sorting through junk mail, checking through a lot of social media, they can be distractions and time-wasters. And then the other quadrant, the urgent and not important part, that’s where a lot of the tools can help automate, or where you’re delegating it out to the tool to help accelerate some of that. And so I think this is a really great conceptual framework as folks are looking at their tasks and matching them up to the tools that they’re using as well. And I know that Meico speaks to this really knowledgeably in his trainings, around how it can be used effectively, so I’ll throw it over to Meico as well, if there’s anything additional to add on this front. Yeah. So, Jason, I think you’re spot on. I think the key here, in the context of the tech tools that we’re talking about, is that we shouldn’t just be using the tools for the sake of using them. You want to be clear about the purpose, at least in terms of the organizational context. And I think it’s critically important, as you think about urgent versus important, to also think about what resources we have, what capacity we have. You know, when I was a communications director, one of the common pieces of wisdom from some folks when social media was emerging was that, oh, you gotta be on all the platforms. Well, that was impossible. We didn’t have the staff or the resources, nor did it make sense, because our audience wasn’t on all the platforms.
So ask yourself the questions: what resources do I have, what time and capacity do we have, what goal are we trying to achieve, and what’s good enough for now versus what we can build toward for later, or perhaps what we can eliminate altogether because it’s simply not relevant, even though everyone else is doing it, or the conventional wisdom says we should be doing this or be on this particular platform. And I think we can apply that to AI right now, right? I am of the mind that it’s not all or nothing. Not every organization needs to be using every single type of AI tool out there. It’s simply inappropriate in some cases, and in some cases it can actually be counterproductive. So you want to be clear about what outcome you’re trying to get to, and how the tool can support you and help you, not how the tool can sort of replace you being a critical thinker who’s actively involved in the process. Yeah, loss of creativity is my biggest concern about artificial intelligence use: that we could give away some of the most creative things that we do. And listeners have heard me talk about this; we had a couple of AI panels. The first one was with Afua Bruce and George Weiner and Beth Kanter and Allison Fine, and I aired my concerns there in more detail, just about giving away the most creative things that we do, and over time us becoming less creative thinkers, less thoughtful thinkers, less critical thinkers. All right. So why don’t we leave it there? And, Jason, I’m just going to reiterate: it’s the Eisenhower matrix, which I’ve followed for years, and I try to think that way, but I don’t do it routinely. That two-by-two that you were describing is the Eisenhower matrix. You said it; I’m just reiterating it for folks.
Because it is a very sensible way of planning. And, Meico, to your point earlier, it’s been around for generations. Old tools can still be valuable. Absolutely. And I would offer to your audience, Tony, that I’ve reworked this for the mission-driven context, and I’ve annotated it, and I’ve given sort of a road map of how you work through it in a practical way. So if folks are interested in that, they can email me, or we can give you a link to put in the show notes, or however makes sense, if folks want access to that annotated version. Okay, the annotated version of the Eisenhower matrix. All right. That’s Meico Marquette Whitlock. He’s The Mindful Techie. You’ll find him on LinkedIn, and his practice is at mindfultechie.com. Jason Shim, chief digital officer at the Canadian Centre for Nonprofit Digital Resilience. He’s also on LinkedIn, and the centre is at ccndr.ca. Jason, Meico, thank you. Thanks very much. Real pleasure each year. Thank you for sharing. Thank you. Thanks for having us. Next week, we’ll take a hiatus from 24NTC with Gen Z Career Challenge. If you missed any part of this week’s show, I beseech you, find it at tonymartignetti.com. You’re gonna be interested in the Gen Z one. You’re Gen Z. That’s me. Gen Z Career Challenge. We’ll see if it holds true for you. We’re sponsored by Virtuous. Virtuous gives you the nonprofit CRM, fundraising, volunteer, and marketing tools you need to create more responsive donor experiences and grow giving. virtuous.org. And by Donorbox. Outdated donation forms blocking your supporters’ generosity? Donorbox: fast, flexible, and friendly fundraising forms for your nonprofit. donorbox.org. Our creative producer is Claire Meyerhoff. I’m your associate producer, Kate Martignetti. This show’s social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation,
Scotty. Be with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.

Nonprofit Radio for June 5, 2023: Artificial Intelligence For Nonprofits

 

Afua Bruce, Allison Fine, Beth Kanter & George Weiner: Artificial Intelligence For Nonprofits

We take a break from our #23NTC coverage, as an esteemed, tech-savvy panel considers the opportunities, downsides, potential risks, and leadership responsibilities around the use of artificial intelligence by nonprofits. They’re Afua Bruce (ANB Advisory Group LLC); Allison Fine (every.org); Beth Kanter (BethKanter.org); and George Weiner (Whole Whale).


Transcript for 643_tony_martignetti_nonprofit_radio_20230605.mp3


[00:04:19.33] spk_0:
And welcome to Tony Martignetti Nonprofit Radio. Big nonprofit ideas for the other 95%. I’m your aptly named host of your favorite abdominal podcast. Oh, I’m glad you’re with me, but you’d get slapped with a diagnosis of algorithmophobia if you said you feared listening to this week’s show, Artificial Intelligence for Nonprofits. We take a break from our 23NTC coverage as an esteemed, tech-savvy panel considers the opportunities, downsides, potential risks, and leadership responsibilities around the use of artificial intelligence by nonprofits. They’re Afua Bruce at ANB Advisory Group LLC, Allison Fine at every.org, Beth Kanter, BethKanter.org, and George Weiner at Whole Whale. On Tony’s take two, a Givebutter webinar. We’re sponsored by Donorbox. With intuitive fundraising software from Donorbox, your donors give four times faster, helping you help others. donorbox.org. Here is Artificial Intelligence for Nonprofits. In November 2022, ChatGPT was released by the company OpenAI. The more powerful, maybe smarter GPT-4 was released just four months later, in March this year. The technology is moving fast, and there are lots of other platforms, like Microsoft’s Azure AI. I guess the sky’s the limit. There’s Google’s Help Me Write, and DALL-E, also by OpenAI, creates images. So artificial intelligence can chat and converse, answer questions, do search, draw and illustrate, and write. There are also apps that compose music, create video, and code in computer languages. A team at UT Austin claims their AI can translate brain activity into words, that is, read minds. And I’m probably leaving things out. What’s in it for nonprofits? What are we risking? Where are we headed? These are the questions for our esteemed panel. Afua Bruce is a leading public interest technologist who works at the intersection of technology, policy, and society.
She’s principal of ANB, alpha November bravo, Advisory Group LLC, a consulting firm that supports organizations developing, implementing, or funding responsible data and technology. She’s on Twitter at underscore Bruce. Allison Fine is a force in the realm of technology for social good. As president of every.org, she heads a movement of generosity and philanthropy that ignites a profound transformation in communities. You’ll find Allison Fine on LinkedIn. Beth Kanter is a recognized thought leader and trainer in digital transformation and well-being in the nonprofit workplace. She was named one of the most influential women in technology by Fast Company and is a recipient of the NTEN Lifetime Achievement Award. She’s at BethKanter.org. George Weiner is CEO of Whole Whale, a social impact digital agency. The company is at wholewhale.com, and George is on LinkedIn. Welcome, all our esteemed panelists. Thanks. Welcome to Nonprofit Radio. We’re gonna start just big picture. I’d like to start with you: just, what are you thinking about artificial intelligence?

[00:05:30.10] spk_1:
That is a very big-picture question. What am I thinking about artificial intelligence? I think there are lots of things to consider. I think first is all of the hype, right? We have heard article after article, whether or not we wanted to, I’m sure, about the promises and the potential of ChatGPT specifically, and generative AI more broadly, when you think about some of the image-based generative AI solutions that are out there that have been in the headlines recently. Of course, I’m someone who started their career off as a software engineer, where AI has been around for a while. And so, sure, generative AI is a different type of application of AI, but it is building on something that has been out in the world, in development, for a while. Pre-ChatGPT, several companies just embedded AI into the tools you already use, whether it’s Grammarly or something else embedding AI into their solutions. So what I’m thinking about now is: how do we help organizations navigate through all of the hype and figure out what’s real, what’s not real, recognize where they should lean in, recognize where they can take a pause before leaning in, and then, of course, underlying it all, how do we think about access, how do we think about equity, and how do we think about how embracing AI will change or evolve jobs?

[00:05:59.52] spk_0:
Afua, please just define generative AI for us, so everybody knows what we’re referring to and we’re all on the same platform.

[00:06:08.78] spk_1:
Sure. So generative AI is essentially looking at a large model. ChatGPT specifically uses a large language model, so lots of text. It looks at that and then gives you what is statistically sort of the next most reasonable or probable word, based on a prompt that you give it, developing the recommendations as you go along.
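As an aside for readers, Afua’s “statistically the next most probable word” idea can be illustrated with a toy sketch. A real large language model is a neural network trained on an enormous corpus; this bigram counter, with an invented mini-corpus and invented function names, is only a conceptual stand-in.

```python
# Toy illustration of next-word prediction: count which word follows
# which, then pick the most frequent follower. Not how an LLM works
# internally, just the "most probable next word" intuition.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
    return following

def most_probable_next(following, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = [
    "the donor gave a gift",
    "the donor gave again",
    "the donor renewed a gift",
]
model = train_bigrams(corpus)
print(most_probable_next(model, "donor"))  # prints: gave
```

Here “gave” wins because it follows “donor” twice in the tiny corpus while “renewed” follows only once; a real model does the same kind of ranking over billions of contexts.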

[00:06:35.79] spk_0:
Allison, please. Yes, big picture.

[00:08:08.00] spk_2:
Well, Afua just said it beautifully, that this isn’t a brand-new idea, although we are in the next chapter in terms of advanced digital technology. I think organizations, Tony, need to get their arms around AI right now, before AI gets its arms around them and their organizations. Beth and I started to look at AI about five years ago, with support from the Gates Foundation, and the promise of it is that AI can eat up the rote tasks that are sucking the lifeblood out of so many nonprofit staffers. They are drowning in administrative tasks and functions and requirements that AI can do very well. In fundraising, it might be researching prospects, taking the first cut at communications with donors, not sending it out, just taking the first cut, helping with workflow, helping with coordination. And the responsibility is for organizational leaders, not line people and not tech people, but organizational leaders, to figure out where the sweet spot is, what we call co-botting, between what humans can do and need to do, connect with people, solve problems, build relationships, and what we want the tech to do, mainly rote tasks right now. So understanding AI well enough, Tony, to figure out where it can solve what we call exquisite pain points, and how to strike that balance between humans and the technology, is the foremost task for organizations right now.

[00:08:32.35] spk_0:
Beth.

[00:10:18.39] spk_3:
Great. So Allison and Afua said it so well, so I’m just going to build on it, but go into a specific area that is kind of the intersection between AI and workplace well-being, and kind of the question: will AI fix our work? Can it transform the work experience from being exhausting and overwhelming to one that brings more joy, that allows us to get things done efficiently, but also to free up space to dream and to plan? Or is it going to be a dystopian future? I don’t think so. And by dystopian, related to jobs, I’m talking about, you know, will it get rid of our jobs, who will lose out. Just a week or two ago, the World Economic Forum released a report that predicts that nearly 25% of all jobs will change because of generative AI, and that it’ll have a pronounced impact by displacing and automating many job functions that involve writing, communicating, and coordinating, which are the things that ChatGPT can do so much better than previous models. But it will also create the need for new jobs, right? I heard a new job description recently: a prompt engineer, somebody who knows how to ask the types of questions of ChatGPT to get the right and most accurate and high-quality responses. And building on what Allison said about co-botting, I think this is the future, where AI and humans are complementary, they’re not in conflict, and it really provides a leadership opportunity to redesign our jobs and to rethink and reengineer workflows, so that we enable people to focus on the parts of the work that humans are particularly well suited for, like relationship building, decision making, empathy, creativity, and problem solving, letting the machines do what they do best, but always having the humans be in charge. And again, that’s why Allison and I always talk about this as a leadership issue, not a technical problem.

[00:10:50.46] spk_0:
Leadership, right? Okay, we’ll get to the leadership responsibilities. George, what are you thinking about AI?

[00:11:30.47] spk_4:
Hard to add to such a complete start here. But I would say that just because this is a fad doesn’t mean it’s not also a foundational shift in the way we’re gonna need to do work and how leaders are gonna have to respond. I also just want to say, right now, if you’re listening to this podcast because your boss forwarded it to you saying, we gotta get on this, I understand the stress you’re under. It is really tough right now, I think, to be in the operational layer of a nonprofit, doing today’s work while expected to make tomorrow’s change. So stick with us. We appreciate you listening.

[00:12:03.93] spk_0:
Thank you, George. You're angling for the co-host role, which doesn't exist, so careful. Watch your step. Let's stay with you, George. You and I have chatted a lot about this on LinkedIn. Use cases: what are you seeing your clients doing with AI, or what are you advising that they explore as they're also managing the stresses you just mentioned?

[00:13:00.00] spk_4:
Well, right now we're actively custom-building AIs based on the data, voice and purpose of the organizations we work with. One of the concerns I have is that when you wander onto a blank-slate tool, like OpenAI, Anthropic, Bard, you name it, you're getting, as Afua pointed out, the generic average of that large language model, which means you're going to come off being generic. So we're a little concerned about that and are trying to focus our work on how you tune your prompt engineering toward the purpose of the organization. We've already mentioned grant writing, reporting, applications, emails, appeals, customization, social posts, blog posts, editing. It is all there, if you're using it the right way, I think.

[00:13:22.32] spk_0:
And that gets to the idea of the prompt engineer that Beth mentioned. You're avoiding that generic average with sophisticated prompts, George.

[00:13:47.96] spk_4:
Absolutely. Yeah. We jokingly call it the gray jacket problem, where I showed up to a conference and I was wearing the same gray jacket as another presenter. We had both walked into a store, and we both thought that the beautiful gray jacket we put on was unique and that we would be seen as such for picking out such a great jacket. When in fact, when you go into a generic store and get a generic thing, you get a generic output. And my concern is that without that leadership presence saying, hey, here's how we should be using this with our brand tone, voice and purpose, we hand it to every single new hire out of college and rerun the social media game. Beth has already played this game, Allison, we've already played this game, where we handed the intern the Twitter account because they used it in college. We're going to just replay that again, and I'd rather just skip that chapter

[00:14:22.42] spk_0:
and we're going to get into this too: that generic average also has biases and misinformation, false information. How about you? What are you seeing from your clients? What are you advising, usage-wise?

[00:16:24.89] spk_1:
A couple of things. First, I think Allison touched on this as well: you can take a breath. You don't have to embrace everything, all the time, for everything. I know it can seem right now that everyone's talking about generative AI and how it's going to change your world, but you can take a breath, because, as I think Allison and Beth both mentioned, the technology is only good if it's working for your mission, if it's working for your organization. So really taking the time as a leadership team to be clear on what you want to do, what differentiates your organization, and making sure your staff is all aligned on that, is the first thing I advise organizations to do. The second is to think about the use of AI both to help your organization function and deliver its services out in the world, but then also to think about how it impacts your staff. Sometimes we can get caught up in: we're going to use AI, I hear it will fix all of our external messaging, we'll be able to produce more reports, more grant applications. All good, all valid. But remember, your staff has to learn how to use it, and how to write the prompts. Your staff also has internal work that perhaps AI could help speed up, freeing their time and their brain space to lean into what humans do best, which is relationships and empathy. So think not just about how AI can help you generate more culturally appropriate images for different campaigns around the world, or how generative AI can help you fine-tune some messaging, or better segment and deliver services to the communities you serve,
but also about how you can use AI to do things like help with notes, help with creating agendas, help with transcripts and more. What are some of the internal things you can apply AI toward to really support your staff?

[00:16:48.76] spk_0:
Allison, that's leading right to some of those rote tasks that you mentioned. So I'm going to put it to you in terms of what Kirsten Hill on LinkedIn asked: what's the best way for a busy nonprofit leader to use AI to maximize their limited time?

[00:18:49.78] spk_2:
So people are looking for some magic solution here, Tony, and we hate to disappoint them, but AI is not magic fairy dust to be sprinkled all over the organization. This is a profound shift in how work is done. It is not new word processing software. AI is going to be doing tasks that, until just now, only people could do. Any other year going back, people would have had to be screening resumes, or writing those first drafts, or coordinating internally, and now the bots are capable of doing it. But just because they're capable of doing it doesn't mean that you should unleash the bots on your organization. Our friend Nick Hamlin, a data scientist at GlobalGiving, said AI is hot sauce, not ketchup: a little bit goes a long way. Beth and I have been cautioning people to step very slowly and carefully into this space, because you are affecting your people internally and your people externally. If a social service agency has always had somebody answering questions like, when are we open, what am I eligible for, when can I see somebody, and now a chatbot is doing that, Tony, you have to be really careful that, one, the chatbot is doing its job well, and two, that the people outside don't feel so distant from the organization that it's not the same place anymore. So our recommendation is, that's

[00:18:52.67] spk_0:
That's a potential... I mean, I guess mishandled, this could change the culture of an

[00:19:36.78] spk_2:
organization. Absolutely. If you are on the outside and you're accustomed to talking to Sally at the front desk, and all of a sudden the organization says to you, your first step has to be talking to this chatbot online instead, the organization has perhaps solved a staff issue of having to answer all these questions all at the same time, but it's made the interaction with those clients and constituents much worse. So we need to first identify what is the pain point we're trying to solve with AI, whether AI is the best solution for doing that, and then to step carefully and keep asking both staff and constituents: how is this making you feel? Do you still feel like you have agency here? Do you still feel like you are connected to people internally and externally? And grow it from there. There is no rush to introduce AI into everything that you do all at once. There is a rush to understand what the impact of automation is on your organization.

[00:21:00.42] spk_0:
It's time for a break. Stop the drop with Donorbox. Over 50,000 nonprofits in 96 countries use their online donation platform. Naturally, it's four times faster, easy payment processing. There are no setup fees, no monthly fees, no contract. How many of your potential donors drop off before they finish making the donation on your website? Stop the drop. Donorbox, helping you help others. Donorbox.org. Now back to Artificial Intelligence for Nonprofits, with Afua Bruce, Allison Fine, Beth Kanter and George Weiner. Beth, I see you taking copious notes. I think there's a lot you want to add.

[00:23:39.85] spk_3:
Oh, there were so many good points made, and I was taking a lot of notes because there was nowhere to jump in. So, a couple of things. George said we did the social media thing, we turned it over to the intern, let's not do that again. I'm not sure that's going to happen, though, because with social media adoption, if we think back, the dawn of social media started in 2003, but it really wasn't until six or seven years later, and I remember it quite distinctly in the Chronicle of Philanthropy, that organizations were really embracing it. There was a lot of skepticism, because social media adoption was more of a personal thing; it started with the individual and wasn't immediately brought into the workplace. I think ChatGPT will be a little bit different, because the benefit there is the allure of efficiency, saving time, right? Or, it can help us raise more money. So I think we might see it develop more quickly in the workplace, and if nonprofit leaders do smart adoption, then there will also be the training required, and the retraining and the reskilling. For me, the most important thing about this is that it is going to change the nature of our work, and if you just let that happen, you're missing an opportunity, because we have a chance to really accelerate workplace learning, both formal and informal, to reskill staff in a way that embraces this without causing more stress and burnout. The other thing I was thinking about is the gray jacket, and I love that metaphor, George. If nonprofits are turning to the $20-a-month subscription for ChatGPT, they're getting the gray jacket version and missing out on the opportunity to really train it. On the other hand, if they're going in without an organizational strategy, are staff being trained? Are they entering confidential information into ChatGPT?
Are they using their critical thinking skills? Because we know that ChatGPT can hallucinate and pick up crap, right? Are they just saying, write me a thank-you letter for this donor, versus, write me a thank-you note in a conversational tone that recognizes this donor's qualities, and then going back and forth to refine a draft? So there's a piece of technical literacy that has to be learned, and that's the technical problem. But then there's also this whole workplace learning and workplace reengineering of jobs, bringing in new jobs and different job descriptions, that needs to take place as well. So we've got to prepare the organization's culture to adopt this in a way that is ethical and responsible.
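Beth's contrast between a bare request and a refined, on-brand prompt can be sketched in code. This is a hypothetical illustration only: the function, fields and wording below are invented, and the resulting string would be handed to whatever generative AI tool the organization uses.

```python
# Hypothetical sketch of a bare prompt vs. a refined one.
# All names, fields and wording here are invented for illustration.

def build_thank_you_prompt(donor_name, gift, quality, tone="warm, conversational"):
    """Layer tone, donor specifics and guardrails onto a bare request."""
    return (
        f"Write a thank-you note to {donor_name} for their gift of {gift}. "
        f"Use a {tone} tone that sounds like our organization. "
        f"Recognize the donor's {quality}. "
        "Keep it under 150 words and do not invent facts about the donor."
    )

generic = "Write me a thank you letter for this donor."
refined = build_thank_you_prompt(
    donor_name="Maria Lopez",
    gift="$250 to our after-school program",
    quality="ten years of steady support",
)
print(refined)
```

The refined prompt is still only a draft request, not a finished letter; as Beth notes, a human compares and edits whatever comes back.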

[00:24:07.24] spk_0:
George, do you feel any better?

[00:25:12.72] spk_4:
I'm not sure how I felt to begin with. But the piece to add on as a nuance: it's not just the generic output; people's ability to identify AI-created content is going to explode. What does that mean? If I were to show you a stock photo right now versus one I took on my phone, it would take you half a second to go, yep, stock photo, stock photo, stock photo. We have all seen the appeals that go out with the generic happy family with sunset in the background. And I think what's going to happen with text generated by folks using gray-jacket GPTs is that your audience is going to see it, identify it and shut it down mentally. It's like driving past that billboard or that banner ad. It's going to be a wash. It may seem unique to you, but I think that's another thing we're going to see happen. I just want folks

[00:25:13.82] spk_0:
to know... okay, I just want folks to know that that gray jacket is a real story. You and another guy did show up with the exact same jacket

[00:25:21.64] spk_3:
at some point, an NTEN conference, wasn't it? In New Orleans?

[00:25:24.91] spk_4:
It was a fundraising conference. And actually the other guy's name was George, so there were two Georges and two gray jackets. I felt very silly.

[00:25:38.76] spk_0:
Yeah.

[00:26:29.31] spk_2:
So the ultimate ROI, Beth and I feel, and we wrote about this in The Smart Nonprofit, is what we call the dividend of time: using AI to do those rote tasks I talked about a few minutes ago in order to free up people to do human things. And George, the opportunity isn't, we hope, to send out more messages or to continue down the transactional fundraising path. The opportunity is to use your time to get to know people, to tell them stories and to listen to them. With or without AI, organizations are stuck in that transactional hamster wheel, Tony, for raising money, and if they can't get out of that, AI is definitely not going to help them. The opportunity here is to move that entire model into the past and say, we're going to create a future where AI gives us the time and space to be deeply relational with people. That's the opportunity.

[00:27:17.67] spk_0:
Well, I'm going to come to you in a moment to talk about how we can prevent this generic average, this gray jacket, from taking over our culture. But Allison, I just want to remind you that when I had you and Beth on the show to talk about your book, The Smart Nonprofit, I pushed back on the dividend of time, because it feels like the same promise that technology has given us through the decades. And I'm not feeling any more time available now than I did before I had my smartphone, or whatever other technology I've adopted that was supposed to have yielded me great time. I don't feel any greater time.

[00:28:42.12] spk_2:
I don't believe that was the promise before. And certainly what we found with the last generation of digital tech, Tony, is that it made us always on, and everything became very loud and very immediate. No question about it. And this next chapter in AI is not guaranteed to give us time. What we're saying is there's an opportunity to work differently and to create this time, if leaders know how to use it well. That's the big if. If we're just going to sit back and say, let's have AI supersize our transactional fundraising and send everybody 700 messages a day, because that's worked so well (said very sarcastically), then no, it is not going to free up any time. But what we are saying is this technology has the capacity to do all of that work that is sucking up 30 to 40 percent of our time a day, and we could be freed up. But only if we use it smartly and strategically.

[00:28:51.05] spk_0:
How about how we can help prevent these generic averages, with their biases and their marginalization of already marginalized voices, and just the fear of them taking over the institution's culture? What are the methods to prevent that?

[00:33:20.42] spk_1:
Sorry. I think I would start with an analogy I've used before: technology is not a naturally occurring resource. There's no river of technology that we just walk down to and scoop up, and now we have technology and it immediately nourishes us. To some of what Allison was just mentioning: in order to actually use AI effectively, it takes intentional management, intentional decisions about how to use it, when to use it and why to use it. And that definitely applies when we think about how we differentiate ourselves even as we use AI, and also how we make sure that we are being intentionally inclusive. I don't know of any technology that just by happenstance has been inclusive. It requires intentional decisions. Some ways that bias can appear in generative AI systems are in the coding that is done, and inherently in some of the data sets that are used. Even the large language models reflect, right now, everything on the internet. There are a lot of great people on the internet; there are also a lot of things on the internet that do not align with my values, or even my actual lived experience. So how do we think about combating that? One, we've already touched on prompt engineering, to make sure that we are asking for the things we want to get back. If you ask ChatGPT, for example, to describe the risks of generative AI, it will give you one list. Refine that prompt to ask about the risks of generative AI specifically affecting women or people of color, and it will give you a more refined response. A month ago, if you asked ChatGPT, "the doctor and the nurse were arguing because he was late; who was late?" it would tell you the doctor was late.
Ask the same question but say "because she was late," and it would tell you it was the nurse who was late. That has now changed, because the people programming ChatGPT have manually made those changes. So as we think about how we can use it: it's through some of the software we're building on top of it, some of the plug-ins you decide to take advantage of or not, and how you might be able to use it on your own proprietary information, with the right parameters in place to keep it with your own data in ways that make sense for your organization. I also think it's an opportunity for funders to fund the creation of new data sets, or more responsible plug-ins, or new open source developments. So I think that's an exciting play there. And then I think there is an opportunity to use generative AI in ways that really do enable more representation. Working with someone who is an advocate for women's rights in India, we're talking through ways that she could more quickly generate posters and informational materials, using generative AI for both images and text, for different places on the subcontinent that she couldn't physically get to and where she didn't have talent on the ground. That is different, though, I'll say, from the announcement from Levi's a couple of months ago that they were going to use generative AI to create a diversity of models rather than hiring people, or BuzzFeed recently saying at a shareholders meeting that they would use AI to help create authentic Black and Latino voices, presumably instead of talking to actual, authentic Black and Latino people. They issued a statement a day or two later saying no, no, no, that's not what we meant, we meant something else.
But my point is, there are ways to think about how you can use generative AI as a nonprofit organization to better reach and connect, while also making sure that you are still doing it in a way, as I think all of us have said so far, that really does center people, that centers communities, and that isn't trying to replace those relationships.
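The refinement pattern described above, the same question asked generically and then narrowed to the communities you care about, can be sketched as a small helper. This is a hypothetical sketch; the function name and wording are invented and are not part of any real tool.

```python
# Hypothetical sketch of the refinement pattern: ask generically,
# then narrow the same prompt to specific communities.
# Function name and wording are invented for illustration.

def refine_for_communities(base_prompt, communities):
    """Append an explicit instruction to center particular groups."""
    if not communities:
        return base_prompt
    who = " and ".join(communities)
    return (
        f"{base_prompt} Focus specifically on risks affecting {who}, "
        "and note where training data may underrepresent them."
    )

generic_prompt = "Describe the risks of generative AI."
refined_prompt = refine_for_communities(generic_prompt, ["women", "people of color"])
print(refined_prompt)
```

The point of the pattern is that the narrowed prompt surfaces perspectives the generic one averages away; the exact wording would be tuned per organization.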

[00:34:11.43] spk_0:
Beth, our master trainer: I see a need for training for leaders, for users. I mean, I'm not seeing any of this happening now. Is there a training issue here for people at all levels?

[00:35:55.78] spk_3:
Sorry about that. Absolutely, yes. But I make a distinction between training and learning. Training is professional development, formal ways of learning particular skills, and those might be more around the technology literacy skills, like prompt engineering, for example. But then there's also the informal piece of learning, which is informal discussions with different teams about how it's changed their jobs, right? Or reflecting on a job description or workflow that needs to be changed, and then sharing that with other departments. So there's workplace learning that is connected with the workplace culture, which in some ways has nothing to do with the technology; it's a result of the technology. What do we now have the possibility to do, because we have this freed-up time, or because we haven't spent so much time staring at a blank screen doing nothing because of blank-screen syndrome? ChatGPT has helped us get to that first draft quicker, and maybe human editing has done the second and third drafts, and we've gotten a better result, and that has improved our end results against our fundraising goals or whatever we're trying to accomplish. What comes next? Those are the pieces of learning that haven't been possible a lot of the time in nonprofits, because we're so busy trying to get the stuff done on our to-do lists, or we're overwhelmed. So: what is possible now that we're able to do our jobs better and take on these different tasks? How can we improve our results and outcomes?

[00:36:24.68] spk_0:
George, how are you teaching your clients, who are hopefully translating that into learning, about using generative AI? Are you talking directly to leaders? Are you training users on skills like better prompting? What does teaching and training look like for you?

[00:38:14.82] spk_4:
We've done our best to put out as much free content as possible, first and foremost, to try to raise the tide of understanding for nonprofits, and we're putting all of that out as fast as I can think to create it. Internally, we're having weekly training sessions on use cases for us, and we're actively building and improving custom-created GPT endpoints for clients that pull in their data and their purpose. I want to go back, though, to Beth talking about what education actually looks like. We could "train" you on how to swim over this podcast. We could talk about all the things you need to do. I'm watching my daughter learn to swim: there's no storybook, there's no encyclopedia, there's no webinar you could watch that would teach you how to swim. There is a fundamental component of jumping in the water, interacting with the tool, learning, coming back, realizing where it frankly lies to you (as I'm really happy we have all pointed out), where it hallucinates, where it's helpful and where the opportunities are. And by the way, that's going to change next month, so it's not a single point in time. You've been an engineer for a while and you've seen it: the code you played with a month ago is just different tomorrow, and what's possible is different tomorrow. On the other side of the coin, I'm a little concerned. Maybe you get anxiety when you hear yet another tool, yet another tool. There are over 1,600 tools listed on just one site, futuretools.io, and there are going to be even more tomorrow. Ninety-five percent of these things are just going to be gone within a year. So I'm also cognizant of the rabbit-holing that can happen in this.

[00:41:48.75] spk_0:
It's time for Tony's Take Two. I'm doing a Give Butter webinar later this month: Debunk the Top Five Myths of Planned Giving. I am especially excited about this one, because the Give Butter host, Floyd Jones, and I are going to be together, co-located, face to face, person to person, in person, real time. With the energy that he brings, and with me trying to keep things light and moving, I think we're going to have quite a bit of infotainment on this one with Give Butter: Debunk the Top Five Myths of Planned Giving. It's Wednesday, June 14th at 2 p.m. Eastern time. But you don't need to be there; you can get the recording if you can't make it live. Watch archive. I used to say that on the show: listen live or archive. Now it's just listen archive, no more live. But this one is bona fide listen live or archive. If you want to make a reservation, you go to givebutter.com, then Resources, and then Events. Very simple. So make the reservation. If you can join us live, that would be fun, because I love to shout folks out, and I'll answer your questions. If you can't, sign up and watch the video. It's all at givebutter.com, Resources, then Events. That is Tony's Take Two. We've got boo-koo but loads more time for Artificial Intelligence for Nonprofits.
I'd like to turn to some of the downsides more explicitly. We're all talking about efficiency, the time saved, the dividend of time. But at what cost, what potential cost, short term and long term? We've already talked about there being a bias toward existing dominant voices remaining dominant. Afua, you had a great example of someone in India trying to represent folks that she can't get to see. So there's a potential upside, but all this at what potential cost? And then there's what we haven't even mentioned. We mentioned false information, but in the video realm: deepfakes. Video and audio deepfakes, photograph deepfakes. Who wants to... I'm being egalitarian there. Who wants to launch us into the risks-and-downsides part of the conversation?

[00:41:54.45] spk_1:
I'm happy to start. I'll say, for the record, I am generally an optimist. However, there

[00:42:02.41] spk_0:
are some things... We've taken judicial notice.

[00:44:17.34] spk_1:
Thank you. For the record, it has been noted; I appreciate that. So again, just reiterating what we've already said: intentionality really matters here. Without intentionality, things can go really wrong, because generative AI has the ability to hallucinate, and because generative AI is reacting to data that already exists, so recognize that sometimes the decisions we make based on that could be really wrong. Think through and imagine how AI might be used to help with hiring processes, even with a more standard version of AI. For example, Amazon, a few years ago, put some work into developing a system that would identify the people best poised to be managers and succeed in senior management at Amazon. The results of the AI showed that white men from particular schools were best poised. Is this actually true based on skills? No, but it was based on the data they had. The system was trained on their internal data, and being a company in the Northwest, it just reflected what their practices had been. Amazon ended up not rolling that out, because they had a human in the loop who looked at what was coming out, reviewed it and determined: this is not actually in line with our values, not in line with what we're trying to do. So I think pushes to completely remove a human from that decision-making loop are ways that generative AI can go really wrong very quickly in organizations. I think we've already started to talk about some of the bias that can appear in results. I gave the example already with gender; that is true along a number of other demographics as well. And so not correcting for that, and not recognizing that even with these large language models, even with something that's trained on the internet, not everyone is represented there,
and making a lot of decisions based on what's there, may not give you the most inclusive and equitable response that you want. I think those are two ways that this can go wrong.

[00:44:33.58] spk_0:
Allison, anything you want to add to this? Sure.

[00:45:47.94] spk_2:
So the AI revolution is far bigger than ChatGPT and generative AI. AI is going to be built into every software product that an organization buys: in finance, in HR, in customer service, in development. Those products were created by programmers, who are generally white men, and then trained on historic data sets, which, as you just mentioned, are deeply biased as well. So you have a double whammy: by the time the product gets to an organization, it has gender and racial bias baked right into it. This, again, is why it's a leadership problem, Tony. We need organizations to know what to ask about these products: how it was built, what assumptions were made in building it, how it was tested for bias, and how you can test for bias, before that HR software program you just threw into your mix is screening out all of the Black and brown people applying for positions. These are real, everyday concerns about integrating AI into work, and why we need to be careful and strategic and thoughtful about how we're integrating it into organizations.
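One way to make the point above about testing for bias concrete is a counterfactual check: score identical candidates that differ only in one demographic signal and compare the outcomes. The sketch below is hypothetical; `score_resume` is a stand-in for whatever vendor system you are evaluating, not a real API, and the names are chosen only to echo published resume-audit studies.

```python
# Hypothetical sketch of a counterfactual bias check on a screening tool.
# score_resume is a toy stand-in for the vendor system under evaluation.

def score_resume(resume):
    # Toy scorer; in practice this would call the real screening system.
    return 0.8 if "engineering" in resume["experience"] else 0.4

def counterfactual_gap(base_resume, field, value_a, value_b):
    """Score the same resume twice, changing only one field."""
    a = dict(base_resume, **{field: value_a})
    b = dict(base_resume, **{field: value_b})
    return abs(score_resume(a) - score_resume(b))

resume = {"name": "", "experience": "10 years engineering"}
gap = counterfactual_gap(resume, "name", "Emily Walsh", "Lakisha Washington")
print(gap)  # a nonzero gap would flag the tool for human review
```

A consistent nonzero gap across many such pairs is exactly the kind of evidence that lets leadership push a vendor on how a product was built and tested.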

[00:47:32.67] spk_3:
Yeah, I really want to pick up on a point that Afua made about the concern about not having human oversight at all times. One of my favorite examples of this comes from Kentucky Fried Chicken in Germany. They were using a generative AI tool that could develop different promotions to put out there, and the data set it was using was the calendar of holidays in Germany, plus, of course, some promotional language, like 5% off cheesy chicken, right? They got into trouble because a lot of social media messaging was just put out there, generated by the generative AI, and the message went out on November 9th, which is the anniversary of Kristallnacht, considered the beginning of the Holocaust. And the promotion was, you know, enjoy $5 off a cheesy chicken to celebrate the night of broken glass. I think the issue is that we begin to put so much trust into these tools that we think of them as human, or the equivalent of human intelligence, and we just take the output at face value, without that human intervention with critical thinking skills. And that's where harm could be done to the end users. So I really think it comes back to that co-botting example we've talked about, and again, the need for leaders to really be reflective and strategic in how they execute it. It's not just about learning the right prompts to ask ChatGPT to get a particular output.

[00:48:10.15] spk_0:
There was another example of that at, I think it was a college. They put out a press release, and at the bottom of the email it said, you know, generated by ChatGPT, or something. So, a human... you've all talked about humans being involved with the technology... a human hadn't even scanned it to know to take that credit line off the email. You know, blind usage.

[00:48:58.01] spk_3:
That's an interesting thing to think about. Do I disclose? If I was writing a post, an article, and I went to ChatGPT because I needed to get it from 1,000 words to 750 words, I could ask it: too long, didn't read; stand by for some text; please reduce from 1,000 words to 750 words. Which I actually have used. But I don't just cut and paste. I actually sat and compared how it changed the language, and one thing I did notice is it took out any sentences that had a lot of personality to them and transformed it into this very generic kind of text, you know. So again, it requires human editorial oversight, if you will.

[00:49:20.80] spk_0:
George, you want to talk about risks, downsides?

[00:50:17.62] spk_4:
Yeah, I would say this is more of a bigger picture risk that I see as the net result of, as we're talking about, GPT tools being built into everything we use. One is that, you know, if you're using it blindly, you are the product; you're handing over information. There was an actual OpenAI hack, well, a hack or data leak, where all of the conversations that were being stored on the side were accidentally shared and open. And so I think that's something to be aware of. Bigger picture, I am watching very closely the impacts of chat-first search. Chat-first search, Bard and Bing (Bard is Google's AI that has now rolled out of its private beta into a public beta) is going to destroy organic traffic for information-based searches to nonprofits inside of what I believe is the next two years. The second-order effects of that are so many that we would need several podcasts to understand, but I'm no longer telling clients that we should expect more organic traffic next year versus this year.

[00:50:57.37] spk_0:
You experienced this with your own Whole Whale site. You did a search, and the search tool gave you back some of your Whole Whale content. It did credit it, but your concern was that that credit was purely optional. Right? You experienced this with your own intellectual property.

[00:52:14.75] spk_4:
I'm watching it across a lot of, you know, we get roughly 80,000 monthly users looking for information that we put out there. I test what that looks like when I do similar searches on Bing, as well as Perplexity dot AI, and now Bard. The thing that scared me the most is that Bard just sort of decided not to even bother with the footnotes in its current iteration and just gave the answer to one of several articles that drive significant traffic to our site. There are two types of traffic that SEO is providing: informational and transactional. And so for the informational, I would encourage your organization to do some of these sample searches and begin to plan accordingly. And it makes me a little sad that that part of nonprofits' ability to be a part of the conversation, when somebody's asking for, I don't know, information about PrEP and HIV, or something about LGBTQ rights history, doesn't get you engaged with the organization. It just gives you the answer, and there's something missing there that I think is going to have negative downstream impacts for social impact organizations.

[00:52:22.87] spk_0:
You expect to see declines in there?

[00:52:38.37] spk_4:
There will be a decline, significant declines. And that's concerning to me because it's cutting nonprofits out of the conversation that they have traditionally been a part of when people are looking for information, and especially at a time when we're going to have a rapid increase in disinformation, because these tools can be used to create that at scale.

[00:54:19.95] spk_0:
We already have enormous disinformation. It's hard to imagine it growing exponentially or logarithmically. I'm interested in what you all think about my concern. Executive summary: that it will make us dumber. My reasoning behind that is that a lot of what we're suggesting, not just us here today, but a lot of what is being suggested, is that generative AI is a good tool for a first draft. Beth, you mentioned the blank screen syndrome. But to me, writing that first draft is the most creative act that we do in writing, or in composing, it could be music. And my concern is that if we're ceding that most creative activity away, then we're reducing ourselves to editor or copy editor. Not to minimize the folks who make their living editing and copy editing, but it's not as creative a task for a human as sitting in front of that blank screen or that empty pad, for those of us who maybe start with pen and paper. We're ceding the most creative activity away and reducing our role to editor, which is an easier job than starting from whole cloth. And so I fear that will make us dumber, reduce our creativity. And I'm saying, you know, generally dumber. You're all being so polite. You could have just jumped in.

[00:56:12.96] spk_3:
Well, I didn't want to just interrupt you. But I do want to challenge you. I agree with you, but I also disagree with you. So one thing that I worry about, and it might be science fiction, and I haven't yet seen research on this, but I do know there's this thing called Google brain. You may be familiar with it. You're trying to remember something, and you can't remember it because you haven't exercised your brain's retrieval muscles, so you go to Google and you start Googling to remember it. It's a thing called Google brain. And there was a study that showed that for people who were using Google Maps or Apple Maps to navigate, it is making their geospatial skills less robust. And so the recommendation is, so that you don't completely lose your ability to navigate, you should go back to a paper map. And there is research around this: when you're doing something in an analog way, if you're writing it down, it encodes in your brain in a different way than if you're typing it. So the thing that I worry about with this is less about it taking our creativity away, because I think if you're trained as a prompt engineer, you could be trained to brainstorm with it in a way that sparks your creativity versus takes it away. But what I'm worried about is how will this affect the human brain, you know, down the road, another decade or so? If we're not using our brain skills of encoding information and retrieving information, and it's like a muscle, you know, is that going to make us more at risk for dementia or Alzheimer's down the road? I know it sounds crazy, but that's the thing I worry about.

[00:56:47.28] spk_0:
I don't think it's crazy. That's what I'm concerned about. I'm concerned, on a world level, that we all collectively will just not be as creative, and I'm calling that being dumber.

[00:57:49.77] spk_1:
I don't think the amount of creativity and innovation is finite, or that if we use tools, we're no longer going to be creative. We have computers now to help us draw, to help us write. We can write on a computer, whereas before we had to use paper; we could only draw with a limited set of tools. When we got, you know, computer-aided graphics and more, we just had more ways to see the world, more ways to figure out what images we wanted to see and how we wanted to engage. Also, as someone who likes to write a lot, I'd say I'm really grateful for my editors and the fact that their brains work differently than mine does when I start writing. Those skills are complementary. But I say that because I think we will have to change, we will evolve how we think, what we think about, and how we work. But that is a different type of creativity, different types of innovation, rather than us just no longer being creative. Yeah.

[00:57:55.80] spk_0:
I didn't mean eliminate our creativity, but reduce it.

[00:58:10.94] spk_2:
It's important, Tony, to stay out of these binary arguments of AI is so bad or AI is so good. It is going to be a mix, as technology always has been. I was just reading a book the other day that talked about the introduction of moving pictures, and how appalled people were that, you know, they could see these images over and over again, right? And how it was going to take away all of people's creativity.

[00:58:23.12] spk_0:
The same thing happened when silent movies became talkies.

[00:58:36.56] spk_2:
You know, we do this every time. We are changing our brains, I'm not saying that we aren't. However, there is going to be an explosion of creativity, of jobs we haven't thought of yet, of opportunities we haven't thought of, that comes out of this next chapter that we are just beginning now. And I think it's important to go into this with as much information as we can, cautiously, again, but with a sense of excitement and adventure, because something really, really interesting is about to unfold.

[01:00:49.90] spk_3:
And I just want to also affirm what Allison just said about this kind of new creativity. It was making me think of, I think it was about a year ago, when DALL-E came out, which is the image generator that works by looking at patterns in pixels of images that are on the internet and creating something new based on your prompt. And I heard an artist talking about this, like, there's this whole debate: are tools like DALL-E, which analyze pixel patterns in images created by real artists, stealing their work without their consent or without their compensation, or is this a creative thinking tool? So, you know, I was messing around, and I have a black and white parti Labradoodle. And so I asked it to create an image of a black and white parti Labradoodle surfing a wave in the style of Hokusai. And it generated four images in the style of Hokusai. Some of them were silly; some of them were, oh, this is really interesting. And it prompted me: what would it do if I asked it to do this in the style of Van Gogh, or the style of Monet? And then I started getting all these other ideas about things that I wanted to do, and before I knew it, I had a thousand different images of a black and white parti Labradoodle doing all kinds of things that I wouldn't even have thought of if I hadn't seen the response that it gave me from the first one. But is that different than if I just did a brainstorm with myself about what I could draw, if I could draw anything? Or is this aided creativity, much in the way that an artist would go out, you know, and look at landscapes for inspiration?

[01:01:22.10] spk_2:
Yeah. Now one place we're in a lot of trouble, Tony, is the fact that our policymakers are so far behind on AI, right? We're going to have enormous copyright issues. We have enormous ethical issues coming up about when AI should be used in policing. The Department of Defense is experimenting right now with completely automated lethal drone weapons. Is that really who we want to be, that we have robots killing people without any human oversight on the ground at all, or in, you know, some headquarters at all? There are really profound policy issues that we should be talking about right now, and we are way behind on those.

[01:01:51.16] spk_0:
George, you want to comment on the role of government, or push back on my concern?

[01:02:45.37] spk_4:
Uh, the role of government is beyond my pay grade, if I'm honest. You know, I'll stick to my scope. I will say, though, Tony, in 2004 podcasting became a thing, a new technology; before that, there were gatekeepers. And I think you've done very well, with, as far as I know, the longest-running podcast for nonprofits. Like, it opens up new opportunities. There were over two million images created on DALL-E per day, and that was back in October, so I'm willing to bet it has increased the output. And on a personal level, it has increased my output, and I have, you know, had a lot of fun building and working with it, as it unblocked me for the new creation of content. Undeniably, though, the way we use tools then shapes the way we change. And I do agree there is a depth of knowledge potentially lost in being able to simply say, write me an article about this thing, and then I tweak it, as opposed to learning an approach. And I think academia is really reeling from how to teach this next generation, and I'm curiously watching how they train the next generation of people coming into the workforce.

[01:03:24.54] spk_0:
Well, let me say you're all probably more optimistic than I am. I don't know if I'm skeptical; I'm just concerned about the dumbing down of the culture. And by the culture, I mean the world culture.

[01:03:31.72] spk_2:
You

[01:03:33.67] spk_1:
know,

[01:03:36.64] spk_2:
have you seen our culture? How much dumber could it get?

[01:03:39.30] spk_0:
Yeah, we're starting at a pretty low level. That's how bad I think it could get. Yeah. Yeah.

[01:05:17.38] spk_1:
I just want to emphasize, I don't think we spent enough time on one of Allison's last points about the copyright issues, the ownership issues. Even as the data economy has exploded since the age of big data was declared, we have created systems that really extract from certain populations, historically marginalized populations, rather than enable and empower these same populations whose data we then rely on, or I should say corporations do, and sometimes, oftentimes, nonprofits as well. And that has just increased at scale with generative AI, and with AI more broadly, right? Especially with generative AI and things that scrape the whole internet of things that people put out there, no longer, as George mentioned, attributing sources, no longer pointing to source material, no longer giving credit to people. Same with artists and musicians and others. I think that is a huge issue, and one from an ethical perspective, especially for a nonprofit whose mission is to empower marginalized communities. If that's a particular nonprofit's mission, it's a big question to consider: how and when should you use generative AI systems that do not attribute information, and don't close that loop back to the people who powered the systems?

[01:05:25.25] spk_0:
All right.

[01:05:26.81] spk_1:
I don't know if that's a positive note, but it's a note.

[01:07:14.66] spk_0:
that was more mixed and positive but great valuable points, you know, great promise um with potential catches and leadership, the importance of leadership and, and proper usage and all. All right, thanks to everybody for Bruce, you’ll find her on Twitter at underscore Bruce. She’s principle of A and B advisory group, Allison, fine president of every dot org where there are fires to put out. You find Alison on linkedin, Beth Cantor at Beth Kanter dot org and George Weiner, Ceo of whole Whale whole Whale dot com and Georges on linkedin. Thanks everybody. Thanks very, very much. Next week. What power really sounds like using your voice to lead and using your executive skills if you missed any part of this week’s show, I beseech you find it at tony-martignetti dot com. We’re sponsored by Donor Box with intuitive fundraising software from donor box. Your donors give four times faster helping you help others donor box dot org. Our creative producer is Claire Meyerhoff. The shows social media is by Susan Chavez Marc Silverman is our web guy and this music is by Scott Stein. Thank you for that affirmation. Scotty B with me next week for nonprofit radio, big nonprofit ideas for the other 95% go out and be great.