
Nonprofit Radio for May 5, 2025: PII In The Age Of AI & Balance AI Ethics And Innovation

Kim Snyder & Shauna Dillavou: PII In The Age Of AI

Artificial Intelligence and big data have transformed privacy risks by enabling malicious, targeted communications to your team that seem authentic because they contain highly accurate information. Kim Snyder and Shauna Dillavou explain the risks your nonprofit faces and what you can do to protect your mission. Kim is from RoundTable Technology and Shauna is CEO of Brightlines. This continues our coverage of the 2025 Nonprofit Technology Conference (#25NTC).


Gozi Egbuonu: Balance AI Ethics And Innovation

Gozi Egbuonu encourages you to adopt Artificial Intelligence responsibly, with a human-centered approach. First, be thoughtful with the threshold question, "Should we use AI?" If you go ahead: Create a thorough use policy; overcome common challenges like staff training and identifying champions; manage change intentionally; and more. Gozi is with the Technology Association of Grantmakers. This is also part of our #25NTC coverage.


Listen to the podcast

Get Nonprofit Radio insider alerts

I love our sponsor!

Donorbox: Powerful fundraising features made refreshingly easy.


We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners

Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.

Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.
View Full Transcript

Welcome to Tony Martignetti Nonprofit Radio, big nonprofit ideas for the other 95%. I'm your aptly named host and the podfather of your favorite hebdomadal podcast. Oh, I'm glad you're with us. I'd turn dromotropic if you unnerved me with the idea that you missed this week's show. Here's our associate producer Kate to introduce it. Hey, Tony. Our 25 NTC coverage continues with PII In The Age Of AI. Artificial intelligence and big data have transformed privacy risks by enabling malicious, targeted communications to your team that seem authentic because they contain highly accurate information. Kim Snyder and Shauna Dillavou explain the risks your nonprofit faces and what you can do to protect your mission. Kim is from RoundTable Technology, and Shauna is CEO of Brightlines. Then, Balance AI Ethics And Innovation. Gozi Egbuonu encourages you to adopt artificial intelligence responsibly, with a human-centered approach. First, be thoughtful with the threshold question: should we use AI? If you go ahead, create a thorough use policy, overcome common challenges like staff training and identifying champions, manage change intentionally, and more. Gozi is with the Technology Association of Grantmakers. On Tony's Take 2, tales from the gym, in addition to my gratitudes. Here is PII In The Age Of AI. Hello and welcome to Tony Martignetti Nonprofit Radio coverage of 25 NTC, the Nonprofit Technology Conference. We're all together at the Baltimore Convention Center, where our coverage of 25 NTC is sponsored by Heller Consulting, technology services for nonprofits. Our subject right now is PII in the age of AI: personally identifiable information in the age of artificial intelligence, safeguarding privacy in a data-powered world. Plus we're adding in the topic... alright, already, the show's over. I wanna thank you all for coming. Uh, we're, we're here all week. Uh, be sure to tip your servers. Um, and we're adding in the topic A Little More Privacy Please, colon, Diving Into Data Privacy. All right, because, uh, our guests, um, asked to combine topics, which made a lot of sense. Um, but, uh, before I introduce the guests... well, now, let's do it this way. So we have, uh, stand by there, we have, uh, first is, uh, Kim Snyder. Kim Snyder, um, I gotta take a deep breath. I do, uh, Kim's title. I'm gonna hyperventilate trying to get enough air, enough oxygen in. I'm only 140 pounds. I don't carry enough in my lungs to carry this, to carry this title of Virtual Digital Privacy Project and Program Officer. You know, Joshua Peskay is thanked for that word salad. It's all nouns. It's all, it's all one adjective, 12 nouns. Joshua, you're, you're out. Anyway, and then CEO doesn't get any easier. OK. Also with us, uh, we have a special guest who's gonna give a couple of syllables. Uh, let me introduce Miles. Miles, say hello. Hi everyone, it's Miles with Fundraise Up. Thanks, Tony. My pleasure. Miles is sponsoring the hub next door at Fundraise Up, so I, I thought I'd give him a little... he asked to give a shout out, so I said sure. And uh, they're giving away free socks. That's what Fundraise Up is all about, socks. And what else do you do at Fundraise Up? Right, so we help nonprofits raise more money with AI, and we do that by not using any identifiable information, and we are completely compliant across the globe. All right, what a segue, and not even rehearsed. Incredible. All right, you've overstayed your welcome. That's enough. OK. OK. OK, thank you, Miles. No, thank you. I, he was, I, I did invite him after he pleaded. OK.
So we are talking about PII. So Miles, a perfect segue, beautiful segue into personally identifiable information. Uh, Amy, we're gonna do the overview, so I'm gonna ask Kim, Virtual Digital Privacy Project and Program Officer. I'm gonna ask Kim Snyder. No, I'm gonna... no, I'm hitting it hard. Uh, so for an overview, why, why do we, why do we combine these two topics? What are our issues around personally identifiable information and, uh, and artificial intelligence? Kim Snyder. So they both center on the issue of personally identifiable information. So on the one hand, we're talking about what kinds of regulations exist, how do you manage your data... I'm too far away. Don't whisper, Kim. Everybody hears you. Oh, go ahead. I'm waiting. Um, now you, you edit this. Don't count on too many edits. Oh dear, OK. Alright, so, um, we're talking about personally identifiable information, which for quite a while, for the last couple of NTCs, we have been talking about here. For quite a while it's been more about regulation. This year, I have to say, it's about having our data out there, and vulnerability, and so looking at data management and how do you start to take stock of your data so that it is less vulnerable, and the people whose data it belongs to are also less vulnerable. And the other topic, which I'm here with my co-facilitator, um, uh, Shauna, is, with all the amens, and I'm here, I'm just like, amen, yeah. So, so talking about what constitutes personally identifiable information, how much that's expanded in recent years. And Shauna, what's, what's your Brightlines, how are you related to this? Yeah, yeah, so Brightlines, I founded it 4 years ago. We are a doxing prevention company. For folks who don't maybe know what doxing means... yeah, define it, please. When folks will use your personal information or sensitive information, they'll post it publicly, essentially posting your documents, that's where doxing comes from, with the intent to incite others to do you harm. So there's like a malevolence there, right? I don't usually consider it doxing if someone posts, like, a relatively available email address from, like, a professional setting. I do consider it doxing when it's your personal email address and the intent is to incite others. It could be your birthday, it could be, could be your wife's. Or my man right here, yeah. The PII, PII has expanded. No, I never... no, no, actually, I came out of the US intelligence community. I was there as a much younger person, and in a different age in the United States and in terms of our national security. I was a really progressive national security person, um. The whole community, yeah, the... I, I'll just say the, I mean, the intelligence community, yeah, yeah, I don't usually get too granular with that, um, but the... was it in the session description? It would have said, OK, yeah, we can talk about that. OK, well, I, I'm not sure, I'm, I'm pretty sure, but there again, it's one thing when it's, like, out on the airwaves versus when it's in, like, a session thing, yeah. And at, at the time when I was there, I was detailed out to the DEA, this might have been what you read, to train them on finding their targets on the US side of the border of drug trafficking organizations, so we were using these same techniques. I was training them in these, like, techniques to find people. We reverse engineered that. Now, four years ago, after the 2020 election, when
folks were going after Ruby Freeman and Shaye Moss for just passing a piece of gum while tallying ballots in Georgia... they have a penthouse in Manhattan now, they have the keys to that penthouse. Um, OK, interesting. So, reverse engineered. I see, reverse engineered your, uh, your prior, prior work. All right. Um, so referring to your session description, uh, how AI and big data are transforming privacy risks by enabling aggregation. So your concern is that the, the attempts at, uh, spamming people, not spamming but spoofing, phishing, they can, it can be so granular and so accurate that they, they look more and more real. This is a part of our problem, right? OK, and people and agencies, people are using artificial intelligence to gather this information and then, and then put it together and collate and then threaten. So they will... so I think we could probably tag team on this. Do you wanna do the production part? So what we see is them gathering data. There's a lot of data that's out there about all of us, and I will... if there's one point folks take away from me talking today, in addition to my hype madness, it's that this is not your fault. Our clients come to us and they say, oh, if I just hadn't shared so much on social media publicly when I was younger, and it's like, no, no, this had nothing to do with you. Your public records are being scraped by data brokers every day. If you own a property, if you've ever registered to vote someplace, if you have a driver's license, which you have to have if you wanna get on an airplane, that data is being sold or scraped. So that's the source data for data brokers. So yeah, sometimes for free, for a fee, yep. OK, but publicly available. You don't need to be an agency, there's no kind of, like, legal process to gather it. Exactly. This is why law enforcement officers, like certain law enforcement agencies, now go around legal process and will just buy data from data brokers. Oh, so much easier than defending a subpoena. To prove it to a judge, to prove it to a judge, and then if they move to quash the subpoena, you have to defend it. Exactly. So AI can now gather data from various sources, so it could be used to scrape these sites. It can then be used to connect data. Let me share a story. We got a phone call, like, a very concerned client. They had just received a phone call themselves from someone who claimed to have photos of theirs, compromising photos from an old Snapchat account, and on the call they described a photo that, that our client knew they'd taken, right? It was a photo of a room, they were describing a room, and the client's like, I remember that room. I remember that poster that they're describing. I think I might have posted it on Instagram, at one point it was public. But how did they get my number? How do they know where I work? And my response was like, this is a scam. Someone scraped, someone bought a scrape of LinkedIn. Maybe they connected that to your phone number. Maybe you have your phone number connected to LinkedIn because you use it for MFA, for multi-factor authentication. They connected that to a handle on Instagram, probably using your face, facial recognition. And then they just made this phone call and talked to you about your employer finding out about these photos, which was a bluff, because your employer's name is listed on your LinkedIn profile. It's terrifying for her. And Kim has taken it a step further. So you can stitch all this together, right?
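To make the stitching point concrete, here is a minimal, hypothetical Python sketch that links two scraped record sets on a shared field (a phone number reused for MFA, say). All names, numbers, and fields are invented for illustration; real data brokers do this at far larger scale, with fuzzier matching.

```python
# Hypothetical illustration: linking two scraped record sets on a shared field.
# All data here is invented; it only shows how trivially profiles merge.

linkedin_scrape = [
    {"name": "A. Example", "employer": "Example Org", "phone": "555-0142"},
]

voter_roll = [
    {"name": "A. Example", "phone": "555-0142", "home_address": "12 Main St"},
]

def stitch(records_a, records_b, key):
    """Join two record lists on a shared key, merging fields into one profile."""
    index = {r[key]: r for r in records_b if key in r}
    return [{**a, **index[a[key]]} for a in records_a if a.get(key) in index]

profiles = stitch(linkedin_scrape, voter_roll, key="phone")
print(profiles)
# [{'name': 'A. Example', 'employer': 'Example Org',
#   'phone': '555-0142', 'home_address': '12 Main St'}]
```

One exact match on one field is enough to turn two harmless records into one targeted profile, which is the aggregation risk the guests describe.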
And you can process all this data at speeds that never were possible before, but you can also use generative tools to create things. So you can easily mimic a style of someone. So part of that data that you grab off of LinkedIn or social is somebody's writing style, so you can, you know... generative AI is really great at tone and style, and also events. So if you're posting about events and things happening, you could get an email purportedly from your executive director or a colleague, referencing that event and things that happened and people who were at that meeting, it depends on how public the data is, and then, you know, that can be used as a basis for a, you know, phishing email, um, that is a lot more convincing. Phone call. Yeah, or a phone call. This person that called our client was a human, but they don't have to be. We've seen cases where EDs are being impersonated, and it's video and it's audio of them that is so convincing to the people that they're reaching out to, and this is... it's trivially easy to do, right? In our session, in fact, we had "which one is the real Kim," and there were two videos of me, and one of them was not me, um, it was AI me. But that cost me $29 to make, so it's not inaccessible. These tools, it used to be, like, really hard to do this. Or 25 cents, and it's, like, a photo and 3 seconds of audio, and they can make those videos, yeah. And you can have me say... you don't even need me saying the alphabet, or, or Kim's title, for Christ's sake, or half of Kim's title. I did say you could swear. I didn't say you could take the name of the Lord. There's a difference. There's a difference. There are boundaries. Even on nonprofit radio, there are boundaries. This is Chris. I've, uh, I've gotten, I've gotten these, uh: Dear Tony, I know I could have called you at my number, or, or written to you at my address, accurately, uh, but I chose this method instead. So now I know they've got my email and my phone and my address. Uh, included a picture of my home, which they probably got from Google Maps, or, or, right. And, uh, it was, I, some kind of Bitcoin, Bitcoin scam. But how did that make you feel? Uh, the first one, I was a little, like... yeah, I was a little nervous. But, but I've gotten, uh, we all have gotten Bitcoin scams in the past, but this one had, like, you know... like, you're concerned, that amount of information, a lot of, yeah, yeah, it had the right... and uh, I, you know, I, I ignored it with some trepidation, and then like a day or two later I got another one, and you know, they just kept coming. I knew it was bullshit. Yeah, I saw one of those, from one of our threat intelligence partners, someone who swims in this every day, and it terrified him and his wife. Yeah, because it's so, it's so close to you. It's why receiving one of those phone calls... or back in the, I would say back in the day, I got really energized around Gamergate, started to try to support the folks who were being targeted by Gamergate. This is back in 2015, and they would describe what it was like to have, like, you know, I sleep with my phone next to my bed, or under my pillow, and to have that be the stream of all of this, like, directed hate: messages like, you should kill yourself, or, I'm gonna do this to you, or, I'm going to do this to your parents, or whatever the case might be. It's so proximate that technology removes what feels like barriers between you and everyone else, and what makes doxing so terrifying is that you don't know who it is. It could be anybody. How do you walk down the street?
How do you, like, sleep in your home, not terrified? You don't know. I never thought about that. Who's coming after you? Thank you. I never thought... you bet, new nightmare unlocked. Yeah, no, no, you know how... but Tony, so you get these things because you're... you're killing me. You're supposed to be reassuring us here on nonprofit radio. Well, you're terrifying. We'll get to that. We will get to that part eventually. We're, we're great at parties. But, but, OK, so you're, you know, a more public person, uh, you, you know, nonprofit radio, so, so you get these things, it's a little unsettling and unnerving for you, right? Yeah. Like, so imagine how, like, a nonprofit staff person who happens to be working in an organization that may be more targeted by malicious actors... OK, so one is, so your staff member starts to experience this, and this may, this could freak people out, right? So that's who we're thinking about, um, and kind of raising the awareness. OK, yeah, I mean, these are folks already dealing with some level of cortisol on a regular basis because of work, because of their mission. I think we've spent enough time on motivation, and let's, let's, uh, let's, let's transition, uh, not subtly, very abruptly, to what the hell do we do. What do we do already? Is it already too late? It's never too late. I'm sure you're not gonna say it's too late. No, I wouldn't be here. Yeah, well, I also believe it, and I've had those moments. Listen, I live in DC, and DC Health Link had their data leaked and taken a number of years ago, and my child, who had not even turned a year old, had her Social Security number lost in that breach, and I was like, oh man, she's not a year old, you know, like, how is this? This is the world we live in, right? And I turned to my partner and I was like, this is just... I don't even know why we bother. And she's like, you can't. You of all people can't have that feeling. It's OK that you do right now, but you have to keep going. No, there are plenty of ways to ameliorate it. Yes, let's get, let's get into them. So what... we're with you. Why don't we start? Go ahead, and then we'll go to Kim. Yeah, I think you can think about this as the individual as the vector of threat to the organization. That can be reputational, financial threats to the organization. It could make it hard to fundraise. If you don't support that person very well, um, you, you would harm your reputation, say, or, um, it could make you look illegitimate to your funders, right? So if you can think about where the risks are to the organization, that's one set of what to do, right, action items, and I might leave that with you and speak more to the personal. So when it comes to protecting yourself as an individual, there are plenty of ways that you can work to remove your data online... was referring to Kim, not me. Oh yeah, no, Tony's not gonna take that part, no, Kim's got that, um, Kim. I won't try your title. Um, when it comes to the individual, listen, all of us have data out there. Again, it's not our fault. We have lived a life, right? Like, we've done things. It's... I think it's a betrayal of trust in our own local governments that they sell this data, and no one's ever asked us for consent, they've never informed us, etc., etc., etc. OK, so what do you do? You can sign up for one of those services that removes your data from data brokers. We consider that, like, um, like taking Advil, right? Like, it's, like, kind of taking care of some of the pain and some of the symptoms. What we also recommend is, like, looking back to the source data itself.
So if you own a property that you live in, we always recommend that people consider moving it into a revocable trust that they don't name for themselves. You've seen too many estate attorneys call it the Tony Martignetti Revocable Trust. Exactly, exactly, a different, a different name for the revocable trust. That's it. So now the ownership is obscured. It's data that's already out there from before. This is the argument that estate attorneys always give us, and we have to educate them on this. They'll say, oh, but your name's gonna be on the document granting it to the trust. But your name was there before, on tax documents. The way data brokers work is that they're constantly pulling this data down and renewing their data set. So when the new data comes down at this address, they want the most accurate, the most recent. They'll overwrite it. So it may be that you lived at that address at one time, but you don't any longer, and if someone's looking for that address, it's not your name on it. So it will get overwritten, especially over time. What we've seen, wildly enough, is that when that piece comes out, it's like a house of cards. When you pull that property record out, the rest of it tends to fall apart. We see our clients less and less. So ownership is kind of a, uh, a core or a hub to, to other data. Yeah, absolutely. Yeah, I think there's some connections happening there with, like, app user data that's also on an ISP that's connected to the house, etc., etc. Are there other pieces about that location, um, that create profiles? Anything else we can do on an individual level besides the, uh, property ownership? Another big vector is voter data, and I know that's probably not popular in this audience, because a lot of folks believe a lot in the voter file and voter data and using it, and I, we often see voter data getting used, mm, getting bought and getting scraped. And so we will recommend that folks apply for programs in their states called address confidentiality programs, or Safe at Home programs. They're always set up with, uh, survivors of intimate partner violence in mind, but a lot of the programs are pretty expansive, so if folks are concerned about stalking or harassment, they can also apply, and that then gives them a proxy address, in some states, like in New York, across all agencies. So the DMV is now not going to sell your home address and your name. They're going to sell your, your name and your proxy address together. And, and shout out the names of those programs that you would look for at your state: address confidentiality program, or Safe at Home. If you're interested, the National Network to End Domestic Violence, NNEDV.org, has a comprehensive, up-to-date list of those programs. OK, awesome. Kim... uh, before we turn to Kim, uh, I think you're the perfect question, perfect question-answering person. You're a person, you're a person. You're neither a question nor an answer. You're, you're just a person with a lot of answers. Um, I read once, it's so hard to unforget, you know, to unlearn things, that, uh, the value of, of stolen data is really in the future, is more financial, like, so that the bad actor can act without you tying it to a specific event. So my credit card, let's say a credit card number is compromised, it's of more value if it's 3 years old than if it was just stolen a couple of weeks ago. Is that true, or is that incorrect? I can see that. I can see that being true.
Maybe we've gotten a little bit better. Banks and credit cards have gotten better about just reissuing new cards. Websites tend to push you to change your password when they've alerted you that there's a breach, so I, I think the private companies, more so than government agencies, but private companies, I think, have caught on to that a little bit. And I think there is some truth, if it's not for financial means but really someone trying to go after you. We call that an ideologically motivated attacker. What we saw... you used the word vector before I did, yeah, this is my background, so, um, what we found with, uh, a university, a client that's a university: their students were being targeted. Some of these outside groups showed up to student houses over the summer. The students had already graduated. We'd gotten some of their address stuff removed. The addresses weren't available in connection to their names online any longer. So what we think happened was that those addresses were screenshotted and saved. That can happen, yeah, so it's not a perfect fix. However, if you have one, as an intelligence officer, if you have one data point, so you have that screenshot, but then you have all these other things telling you that Shauna Dillavou no longer lives at that screenshotted address, you might show up there, but you're not gonna spend a lot of time on it, because you can't verify it. You can't confirm it with another source. Makes sense? Yes, thank you, thank you. All right, Kim, let's turn to you on the organizational level. What, uh, what can we do, uh, there to protect ourselves from what's already out there? How do we help nonprofits, and small and midsize are our listeners. Alright, so for many years, the, the kind of mantra has been to verify, verify, verify. I thank you very much, that's Kim Snyder and Shauna Dillavou. No, I'm joking. She's like, we're out of time? No, we're out of time. Are we out of time? No. I'm an only child, I fall for jokes very easily. I wish I had known. I wish I had... I had so many. I had so many more. I had so many more in mind for you, specifically. Talking about a targeted attack. Oh my, talk about a vector, a vector. I was coming right at you. I could have... and you're, you're putting this on the airwaves. You know how vulnerable you are. Oh man, I got all kinds of advantages. All right, I'm sorry, I interrupted you. What was I talking about? I'm dying. Go ahead. OK, I'm sorry. OK, so we used to talk in the cybersecurity world about, you know, verification. Verify, verify, verify, that was the mantra, right? So now we kind of reshape that so that it's vet and verify. So have kind of multiple ways of verifying, especially incoming requests. Anything... kind of trust your spider sense is what I'd say. If something seems a little bit off... like, what, what are we talking about? So if you receive an email, if an email comes and it, you know, it comes from your development director, who's referencing something, you just went to the panel, or if it comes from accounting, to write a check, if any money is involved, and it wasn't, like, completely expected. Even if it was a little expected, actually. I've seen, I've seen this happen, where people got into, um, nonprofit systems, and using AI can scan what's going on very quickly.
And then target things that are about to happen, from kind of things that are... OK, so, so I would... so the instinct, instinct, OK, use your, use your instinct, but also make it a policy, make it a process that you just follow, an uncomplicated process for verifying. Like, any financial transaction needs to be verified, even if it's expected. Yeah, so, yeah, so you wanna walk through that? You just get much, much more deliberate about verification, and, and who is it coming from. And you don't want to... confirming, did you send this email, or... not replying to the email, but by phone. Yeah, exactly, yeah. Did you send this email about this rush transaction, or, or routine transaction? Do it in a different format, right, a different channel. Yeah, so, you know, even though the instinct may be to email back quickly, but no, right, um. But then what you do also is create a culture in your organization where that's OK to do, where it's OK to take that extra 30 seconds, a minute, to, you know, verify, to ask someone for their time, to say, I just wanna check, did you send this to me? Um, and in that way it's OK. Even if it's the executive director, you can say, did you send this to me? I just wanna make sure. And so that, that's an OK thing to do. In fact, that's a good thing to do. Now, there have to be boundaries around this, because we can't do it for every, every message we get. So you mentioned financial... financial transactions, and, no, no, no, not nervous at all, financial, no, no, no: financial transactions, any kind of initiated correspondence where they're asking you for something or for some information. I saw a scam recently where, uh, an, an old employee was trying to be reinstated and wanted to go around HR to IT to get their accounts set back up, like, I'm, I'm coming back, and it was, like, using the person's middle name, so it's already a little bit fishy. But they went all the way up to the CTO of the, of the company and said, hey, so-and-so... and these people were friends on LinkedIn and, like, had shared messages back and forth, so the attacker knew this was a personal relationship. Hey, so-and-so, I'm trying to get reinstated. They're telling me you need to go to HR, but, like, but I can do this, I just need to get my account access back up and online. And the CTO is like, no, bro, you gotta go through HR. I can't do anything. Because they had those controls in place. But, let's be fair, small and medium-sized organizations don't. So it's, I'll just take care of it now, or, we don't have a, we don't have any clear guidelines that we give to people: for all requests, we need to go to HR. I thought of another potentially nefarious request: send your logo. Could you, could you... I need a, I need a high-def of the logo, you know, the, the JPEG I have is, is not good, I need a high-definition logo. That could be, that could be to produce a check, that could be to make a spoof, a spoof website. Um, OK, I mean, but it seems innocuous, send a logo. Yeah, it's very easy to spoof a website, right? So, you know, you know, check. Also check where it's coming from, right? So, you know, I've had an organization where there were two spoofed... um, there were spoofs on both ends: a spoof of the funder, a spoof of the, the grantee. Can you tell us more about that story? It's a really good one. So, yeah, so they, they got into an organization's, um, you know, Microsoft environment... I ask the questions here. Whoops. Go ahead. Uh oh, off the mic. It's, like, 3:30, go ahead. So, um, anyway, that's late in the day. And I'm thirsty.
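Kim finishes the spoofed-domain story below. To make the lookalike-domain risk concrete, here is a minimal Python sketch of one cheap guardrail that pairs with the verify-in-a-different-channel policy: flagging sender domains that are near, but not exact, matches for domains you trust. The allowlist and addresses are hypothetical examples, and real mail filters do far more than this.

```python
# Minimal sketch: flag sender domains that look like, but aren't, trusted domains.
# The allowlist and addresses below are hypothetical examples.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"examplefoundation.org", "ourgrantee.org"}

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between two domain strings (1.0 means identical)."""
    return SequenceMatcher(None, domain, trusted).ratio()

def check_sender(address: str, threshold: float = 0.85) -> str:
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        if lookalike_score(domain, trusted) >= threshold:
            return f"SUSPICIOUS: {domain!r} resembles {trusted!r}, verify out of band"
    return "unknown"

print(check_sender("grants@examplefoundatlon.org"))  # 'l' swapped in for 'i'
print(check_sender("grants@examplefoundation.org"))  # exact match: trusted
```

The point is not the tool but the policy behind it: a near-match on a trusted domain is exactly the case where you pick up the phone instead of replying.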
Yeah, late in the day. It's not, it's, it's, well, it's almost 3 o'clock. You've been going since then nonstop. Um, anyway, all right. So the organization had someone get into their systems for a very short time, but in that short time they were able to tease out some information. Again, this is where AI can help with this kind of analysis: in a short time it can analyze a lot of data that it can grab very quickly. And, um, identified some upcoming financial transactions, which were rather large. And so, um, in order to kind of trick the person into sending to the wrong place, they set up fake websites, fake websites for the foundation, fake websites for the grantee. And domains, not websites, domains. And so then they had emails coming back and forth, you could hardly see the difference, and so the, the, the real people, the real people were communicating with the bad actor on both sides, and the money got sent to the wrong place. OK. Yeah, that was, that was actually... no, they did great, but, but it was, that was a happy ending, but not necessarily. We started with Shauna, so we're gonna end with Kim. Give us... oh no, we did, OK, well... I'm not Shauna. Your mic is down, but she still gets through. She talks and laughs so loud you hear her over Kim's mic. No, I didn't, I did not. But one more thing before, before we... unless we're totally out of time, um, don't shoot the messenger. So create a culture, this is another thing that any size nonprofit can do, where, if something happens, if you click on that thing, if you did that thing that you feel like, uh, that was really dumb, right, make it OK to report that, and you don't get in trouble, and there's no shame and blame, because it happens. So, but yeah, the, the no-blame kind of... we encourage you to, you know, say it, yeah, call yourself out, yeah, and there's no punishment. You know, some organizations, like, they don't want bad news at the top, so. All right, we're gonna leave it there, OK? All right. That's Kim Snyder, Virtual Digital Privacy Project and Program Officer at RoundTable Technology, and Shauna Dillavou, CEO of Brightlines. Thank you, Kim. Thank you, Shauna. It's a pleasure. Shauna laughed her ass off. I've got a good sense of humor. All right, I love it. Uh, and thank you for being with a, uh... well, whimsical, I'm not sure it covers it. Raucous, maybe. Uh, at one point, uh, uh, uh, anarchical, because, uh, there was a question that I did not answer. Uh, session. Uh, thank you for being with us at, uh, 25 NTC, for this episode sponsored by Heller Consulting, technology services for nonprofits. Virtual digital privacy project and program officers.

It's time for Tony's Take 2. Thank you, Kate. A new Tales From The Gym episode just happened this morning, this very morning. I was minding my own business, as I do, on the elliptical, and overheard two women talking. One lives here permanently, and the other one, who said her name, Sandra Lynn, uh, she lives in North Carolina, but not here in Emerald Isle. She lives, uh, in the Raleigh area. Like, that's about 3.5 hours, 4 hours away, roughly. And she was lamenting, Sandra Lynn was, that, uh, that she can't live here full time, house prices are high. And she also still has, uh, her mother, and her father-in-law, so her husband's father... they're still both alive, and so she needs to stay in that area. But she was, you know, looking forward to retiring here sometime, but lamenting that she couldn't live here now. And that got me thinking as I was on my sixth or seventh, uh, interval on the elliptical. I do 8... 8, episode 8? Not episodes. What did I just say?
8 intervals. I do 7 intervals of a minute, take a minute in between, and then the last interval is 2.5 minutes. I was toward the end, and it got me thinking, listening to Sandra Lynn, that, uh, I'm grateful that I do live here full time, permanent. This is my home. And that, you know, it's... that there are other people who don't live here who wish they could, you know. So, uh, you know, I, I add... I have, I have a long list of gratitudes, but I don't specifically say grateful that I live here in Emerald Isle full time. So I'm gonna add that to my gratitudes, that I do every... I guess I've told you, every 2-3 times a week. I'm adding gratitude that I live here in Emerald Isle full time, in this beautiful place, and I have the ocean across the street. Uh, your own gratitudes. I hope you're, I hope you're doing your gratitudes out loud, at least a couple of times a week. That is Tony's Take 2. Kate. You do sets. Uh, well, sets are for... yeah, no, that's different. Intervals. Intervals on an elliptical: you do a minute hard and then a minute resting, and then a minute hard and a minute resting. It's called high-intensity interval training, HIIT, high intensity. It just means you do intervals of things, like, you sprint... yeah, I don't run, I'm on an elliptical, but you might sprint and then walk, and then sprint and then walk, and sprint and walk. Those are called intervals. Sets are, like, you do 3 sets of 10 if you're, if you're on a weight machine or something like that, or maybe pushups might be 3 sets of 10, or something like that. I don't know, there seems to be a difference... well, I think the interval is because you're still active, you're just resting in between the high-intensity intervals. Gotcha. That makes sense? Yes. And I am grateful that you have a beach house. Yeah, because you get to, yeah, you get to visit and, uh, laze around and, uh... what is the word I'm looking for, uh, not schmooze, but, uh, you get to, uh, I don't know. I can pretend that it's my beach house. Yeah. You can for a week, yes, but then, then I'm very happy to say goodbye after a week. Love you too. We've got boo-koo but loads more time.

Here is Balance AI Ethics And Innovation. Hello and welcome to Tony Martignetti Nonprofit Radio coverage of 25 NTC, the 2025 Nonprofit Technology Conference, where our coverage is sponsored by Heller Consulting, technology services for nonprofits. With me now is Gozi Egbuonu. Gozi is director of programs at the Technology Association of Grantmakers. Gozi, welcome to Nonprofit Radio. Awesome. Thank you for having me, Tony. Pleasure. You're welcome. Your session is AI Strategy For Nonprofits: Navigate Ethics And Innovation. We have plenty of time together, but can you give me a high-level view of the, the topic and the session that you did? Sure. So the session was really, um, it was really spearheaded by Beth Kanter, uh, and it basically provides, uh, a balcony view of where we are in the sector in terms of AI adoption, ethical, responsible AI adoption, in the nonprofit and philanthropy sector. And so, uh, we really start with what we found in the Technology Association of Grantmakers' State of Philanthropy Tech survey that we did in 2024. In that survey, we found what many grantmakers are currently doing with AI, as far as, you know, are they testing, are they experimenting, has anyone rolled it out enterprise level, which is, you know, at the organization-wide level. And what we found is that,
and this mirrors what we're seeing in the nonprofit world, is that most folks are not using AI for, you know, anything that's crazy, you know, innovative at this moment. It's really just kind of, you know, meeting summaries, you know, taking notes, that sort of thing, um. And so, but in addition to that, we found that, uh, 81% are using AI but only 30% have AI use policies. So you're using it, but you don't have any guardrails. You have no way to tell your teams or your staff, hey, this is what we don't put into the AI, this is what we do put in. So you're really running the risk of having your information potentially used in a way, or used to train, uh, an AI model, that, um, you know, could potentially put your members at risk, your grantees at risk, whatever the case is for your organization. And so, with that little bit of an overview, it basically came down to the importance of AI experimentation and really starting slow, starting at the very base level, working with your teams to kind of talk through: should we use AI? If we did use AI, what would that be for? So thinking about the use cases, the business, um, the business use, like, what, what would be the business case for it, and then, you know, assembling a nice team of folks, you know, as advisers or experimenters and champions at your organization, uh, to really kind of help you all start doing that experimentation in a safe and kind of, like, low-risk way, um. And then from there, really defining whether or not AI is your, your next move, and then once you do decide that AI is the next move, you wanna move into that next level of the AI maturity, which Beth, you know, covers really, um, really well. Uh, you know, you go from that exploration to discovery, and then you move into experimentation and ultimately enterprise, eventually. Um, but what we're finding is that most folks are not there yet. They're still very much experimentation, early stage, very early stage, um. And, uh, you get to kind of... get to see a case study of it through the work that Lawan did at her organization, United Way Worldwide. OK, well, we don't have her with us, but you can provide a lot of context, a lot of, a lot of detail. I just said you could talk. All right. Um, are, are we... do you know... you might not be part of what you surveyed, but was there even intentionality around the should-we, should the, the should-we-use question, or did it just kinda happen because people started, people started hearing about it, using ChatGPT? Well, you know, with one of the questions that we did on the survey, we found that, like, there's quite a few folks that are using it in what we call shadow use, or shadow AI, which is basically you're using AI but your organization doesn't know what you're using. I see. Alright, so that's not intentionality at the organization level. No, no, no, I would say not, not. Uh, yeah, so we really want to encourage the intentionality, which is: don't start using the AI unless you all have that collective organizational conversation of, is this something that we should be doing? Is it useful? Is there a business case to go with it? Is it relevant? Does it make sense? Is it safe for our organization? Does it align with our ethics? And then consider going into experiments. OK, let's explore that question a little bit, uh, now, in 2025, because I, I suspect at 26 NTC we won't be asking the threshold question, should we, should we use.
So what, what, what belongs in the conversation if we're, if, uh, if we're at the stage where, well, uh, individuals may be using it, but we don't know? Or if nobody's using it and we're trying to decide enterprise-wide... you know, we're not even at the is-there-a-use-case, like, but should we, should we explore it? What goes into that conversation? Sure, um. Again, you know, really thinking about the business case. So when you're having that conversation about should we use AI, then you have to think about what would be the specific usage of it, right? So say you're the finance team and you're considering using AI. What would be the benefit of using AI versus doing the, doing the, the workflow or process that you currently have and you're thinking of having AI do? So you really kind of have to have that conversation, like, an in-depth conversation, about the process that you're doing right now. Is there anything wrong with it? Are we losing anything? Could we gain, uh, productivity, time in our days and our schedules, if we were to move to using AI to do this one process or this one, this one workflow? Then at that point you think about, OK, maybe we do get a benefit out of it. Now that we get a benefit out of it, what are some of the things that we have to be concerned about? Now that we have a benefit, we wanna make sure that any financial information that could be sensitive, any of our donors' or their, their personal information... do we not want to have that being able to be, you know, used in the AI model, or whatever system that we're using? So, you know, you, you start with here's how we do things, here's how AI could potentially benefit, and then you move into that conversation: OK, if we did, what are some of the risks and concerns? Really thinking through all of them as much as you can. We know that you can't think of every single possibility, but as much as you can, kind of write it out and map it out as a group, with several folks in the room, the better that you are at being able to say yes or no on moving on with AI as that potential new solution. OK. And a part of what goes into this intentionality is a usage, a use policy. Your, your... you know, you want us to be thinking about ethical uses. OK, uh, what, what are the, what are, what are the ethical concerns? How can you, how can we talk through those? Well, you know, one of the key ethical concerns is that we know that most AI models that exist now, including OpenAI's, were trained on the internet, and we know the internet can be, uh, wildly biased, wildly biased, filled with lots of terrible things. Not only biased but misinformed. Misinformed, wrong, yeah, complete nonsense in a lot of cases, um. And so, if you're using these open AI sources that have been trained on the internet, then you have to be really careful about deciding to use it against, say, your theory of change. So if you're an organization that is, er, serving, uh, vulnerable populations, groups that are already kind of under attack, whatever the case is, do you want to have AI making or informing your decisions related to work that you're doing with these vulnerable groups? More than likely, no, because the AI may choose to do things that are more in line with the group that is
biased, that may, you know, be unethical. And so you want to make sure that whatever you're using the AI to do, it isn't putting the organizations and the people that you support and serve in harm's way. So really thinking through, hey, if we're gonna use it in this way, maybe we need to use it in a way that does not put these groups in harm's way. Maybe we just focus on using it internally, like folks do for the meeting notes, because that's a very low-risk thing. Whereas if you're, you know, inputting, you know, uh, decisions about whether or not to continue funding an organization, or trying to measure whether or not their impact is aligning with your organization's missions and values, some of those, those questions are not as clear-cut as yes or no. Whereas an AI that is trained on purely just wanting to see impact, purely wanting to see a return on investment, which is not always the case of what happens in philanthropy... then you really have to take, take a step back and say, is this the most ethical decision to go forward? Could we be putting organizations in harm's way? Now, you can control what a model is trained on, yes, but that requires something proprietary, right? You have, you have to pay a developer to, uh, to create that. I guess, I don't know, it's called a small language model? I don't know what it's called, but something that's trained only on your own data: your own website, maybe your own documents that you, that you provided. But that, that requires a fee and a, and a developer. Exactly. It, it can, it can cost, it can be expensive. The other option, if you don't want to go the route of creating your own AI: you do a paid version, because we know with the free versions of AI, specifically, I'll talk about OpenAI, there's not a whole lot of freedom or flexibility in turning off the settings to prevent it from training the model on the data that you input. And so in that case you definitely need a use policy, because some folks would probably just be like, I really need to, you know, analyze all of this data on all of the groups that we served in this, you know, community that is already really, you know, under attack or potentially in, in harm's way, and then now you're putting that information into the AI, you know, into the free AI, and now the AI has all of these people's information and can now use it to provide to other people who may look them up or want to find data on them. That's... you've, you've shared data, and it's gone. I mean, it's, yeah, yeah, yeah, there's no control. So yes, enormous intentionality, care, um. And what if we don't have a, you know, we don't have a, a chief technology officer, chief information officer... you know, it's an executive director, CEO, and, and maybe decent-sized staff, I don't know, 35, 40 people, but they still don't have a chief technology officer. How do we, how do we, uh, ensure the intentionality and care that you're, that you want us to? Yes, um, there's a couple of ways, and I think... oh good... I think at the core of it, you don't have to have a CTO, and even yourself, you don't have to be a technologist. I would never classify myself as a technologist, but we can... there's ways to find training. There's plenty of training, and NTEN has fantastic training, AI certifications for professionals in, in the nonprofit sector, um. And I'd love to share that NTEN and TAG are teaming up, and we will be offering one for philanthropy professionals very soon.
And so these are opportunities, very, you know, relatively easy ways for people who don't have that technical background to learn about the AI itself, get themselves familiarized with, you know, what they need to be doing to protect themselves and their staff, ways that they can start to experiment in a safe, you know, safe space, um. So, and there's plenty of also free tools, free education. I will, you know... even though I've talked about OpenAI a lot, OpenAI just announced their OpenAI Academy, which has all free resources and tools for learning how to utilize AI, for anyone. And so there are plenty of free resources out there, and people online, you know, uh, there's plenty of folks on LinkedIn that I see on a regular basis that are sharing information and providing some guidance for nonprofit leaders, as well as, uh, folks that are just not technically inclined. So there's ways that you can kind of upskill and train yourself to understand how to use AI even if you don't have that technical experience in house. Say a little more about this partnership, can you? Uh, and it's the Technical Association of Grant... pardon me... Technology Association of Grantmakers, thank you. Um. Yeah, so I don't have a whole lot of details to share, but essentially, if you've, if you've used any of the great training and certification resources on the NTEN website, we are essentially trying to make a parallel version of that same professional certification for nonprofit leaders using AI, for our foundation leaders. And so, uh, you can expect really kind of a similar learning process; however, it'll be tailored to some of the different functions and needs that we find at the philanthropy, you know, at foundations, versus what you would see at a traditional nonprofit. OK. So, I'm sorry, it's intended for professionals, I should say. Um, alright, what... so, thank you. You know, that's important, ethical considerations. Um, anything more on ethics? Because, uh, then I, I want to talk about the policy, what belongs in your use policy, but is there more about ethical concerns? OK, OK, OK, enormous. I mean, if you, if, if you're exposing your data, and, and it's gone. It's, it's out there, like you said, right. Um, our use policy that, uh, only 30% have, although 81% are using AI. What goes into this use policy? The use policy essentially just outlines what you and your team should be thinking about before you ever use any AI. So it's kind of that no-go-or-go kind of conversation. So if it's sensitive data, if it's information related to any of your members that you just wouldn't want anyone to have outside of your organization, you probably wouldn't want to put it into an AI system. So it just kind of outlines, you know, essentially guardrails for, for teams and, and staff to understand how to best utilize it. And I think some folks are also, you know, thinking about the environmental impacts of using AI, and are really now making sure that their data use policy or their AI policies are also, you know, having folks be ethical about how they're using, when they're using, AI, right? So, you know, if it's to do something that could take you probably about the same time that the AI does, don't use the AI. Um, if you're just, you know, just tossing anything, any old thing in, and asking questions all day, probably also not a very good use of AI. You really wanna think about AI very strategically and intentionally, right?
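One concrete guardrail a use policy can point to, sketched here in Python under stated assumptions: a screening pass that refuses to send a prompt to an AI tool if it appears to contain PII. The patterns are illustrative examples only, not an exhaustive or authoritative list, and no real AI API is called.

```python
# Illustrative sketch: screen a prompt for obvious PII before it goes to any AI tool.
# The patterns are simplistic examples; a real policy would define its own list.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the kinds of PII found; an empty list means the prompt may proceed."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize donor Jane's giving: jane@example.org, SSN 123-45-6789"
found = screen_prompt(prompt)
if found:
    print(f"Blocked: prompt appears to contain {', '.join(found)}")
else:
    print("OK to send")
```

A check like this is cheap to run before any free-tier AI tool, which is exactly where, as Gozi notes, data you paste in is data you no longer control.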
You wanna make sure that if you're going to the AI, it's for something that, you know, is gonna save you significant amounts of time. One of the things that I often will use AI for is drafting, you know, large descriptions for events. That takes me sometimes hours; if I give it to AI, it can do it for me in seconds. And the key to descriptions of events... yes, like, so we have webinars, events that we have on our website, yeah. So, you know, I, I, I, I don't wanna sit there talking about all the learning that you're gonna get out of it and the objectives and this and that, and so, AI... I've trained, I have, like, a GPT that is based on, kind of, like, my voice, that I provide it, like, hey, here's the prompt, here's what I'm kind of looking for. It provides me a draft, and then I use that draft and I manipulate it how I want. Um, and so you really wanna make sure that, you know, when you're prompting the AI, or you're using the AI... they've measured it. I think one prompt uses as much energy as... I think it's like an entire city, like, it's crazy. It's, like... don't quote me on that, but it's enormous. There's quite a bit of energy, and I can actually, actually share a link to, um, one of the stats that came out about it. There's a researcher that's been sharing a lot about it, um, and she was just interviewed by, uh, I believe it was Dr. Joy Buolamwini, uh, by the, um, the AI justice, uh, group that she, she leads. Um, and so there's a lot of, there's a lot of energy being used, so if you're gonna use it, you wanna make sure that it's for something that you need. You wanna learn prompting, good prompting, so you can get what you need out of it, and then you can, you know, refine it and make it better. Sometimes you may have to go back in and ask the AI to refine, you know, what it did, but you really do wanna keep it to a minimum. You don't wanna be using AI constantly, because the energy use and the impact on the environment is extreme. Extreme. That gets over to the ethical concerns as well. Exactly, because it's, yeah... so, yeah, you're, you're just really, um, basically telling your teams, here's the, here's what we expect out of you when you're using AI, and these are the things that could result in consequences if you don't follow this policy. OK, um, what else, anything more about the policy, what, what, what belongs in there? Um, you know, I think the, the key things is, like, you know, making your teams aware of the types of AI that are provisioned, because that's another thing: some organizations have taken the decision to block certain AIs that they don't want you using, or even turning off certain AI functions in their, uh, current tech stack. So, uh, you wanna make sure that it's really outlined very clearly, the types of AI that are in use. And also, it may... you may wanna include something in there about how you, uh, communicate your use of AI to your teams or other people outside of your organization. So, kind of a, a nice, nice little bucket of what's internal, external, and then also where can you go for AI and where should you not go. Disclosures to the public. Um, why would there be some, uh, some platforms that are, that are ruled out? Well, because, you know, one of the things that I've seen some members talking about within, you know, the TAG space is that there are some AI, or some systems, that do not allow you to turn off the AI function, meaning that you don't have any control of how that AI is taking your data that you have in that tech stack or that tech tool.
Oh, you don't have control. No, yeah. And in, in fact, there was actually a conversation about, specifically, a DAF, uh, platform that actually made this clear to many, many, many of our members who use it. And so that is something that you really have to be concerned about: do you have any level of control? If you don't have any level of control over how the AI is using your data in that system, there are organizations that would likely say, this is a... this is not a system that we would allow you to use. OK, it's a good example. Um, what else, uh, came out of the session? We still have a couple more minutes together. What else did you talk about in the session that, uh, that you can share with us? You know, one of the great things that we did was we did these scenarios, um, that Beth put together, about, you know, what are some of the things that you would say if you're in a situation where, you know, say, for instance, uh, your organization is really excited about using AI, they wanna jump head first, and they just wanna start using AI without, you know... and, and they, they basically just want you to start rolling it out and get your teams on board. Um, and so in that scenario, we really talked through all of the processes. You know, first of all, that first conversation that we talked about, like, should we even use AI? That didn't happen, so that needed to happen. The other part is also, you know, how fast do we wanna roll things out? What are some of the different change management principles that we should be thinking about as a team that could make AI adoption more beneficial and successful? So really, you know, starting slow, but really starting at the very beginning of, like, should we or should we not... like, that should be your first question, because, truthfully, many organizations do not need AI. It's true. I mean, it's just the reality. Some organizations will never probably need to use AI, and then there's a whole lot of them that probably will. So that question of, like, should we do it, has to happen first, um. And I think if you're doing it on your own, as a rogue: stop. Do it on your own time. You want to practice on it, do it after, after hours, on a weekend. Exactly, exactly. Not on our computers, not on our systems. Yeah, yeah. If you... and that's actually one of the things that, um, you know, we've seen with a lot of our members and foundations, and I think Beth has also seen with, you know, some of the work she's done in the, in the sector, is that a lot of foundations are now trying to just get to the staff and say, hey, look, we know that you're using it, can you just tell us? And try to make that trust... build that trust with each other. And I think that's gonna be really a good way to help prevent a lot of the issues. Alright, let us know, but then stop. No, there's no repercussion for reporting yourself. But only, well, only for what you do after the report date are you liable. All right, stop it. Exactly. OK, going rogue. All right, um, anything else? Uh, oh, questions. Any, uh, provocative or memorable questions that came from the audience? I'm trying to think.
Um, no, well, you know, the one that had come up was just, uh... you know, there was, there was someone at the front that had asked about, you know, AI hallucinating. And, you know, it hallucinates, yeah. And she, and the, the person was basically saying, you know, be careful using it as an organization, because it could give you answers that are just factually wrong. And so, you know, our response was, like, yeah, you're right, AI does hallucinate, but that's why it's incredibly important, and I, and I didn't even say this myself, but at the beginning, which is: if you use AI, you always wanna make sure that it's for something that you have a certain level, or high level, of expertise or knowledge about. So, you know, if I'm asking it to write descriptions for me, I know about the event details, so that I'm not just gonna let the AI write a description and let it go and put it on the website. "Yeah, that sounds good, I'm gonna put it up"... no, you review it. You make sure that the details it's including are correct. If there's any statistics or numbers that are being used, you can go and verify that data. So if you're ever using AI for anything, you should always have a human in the loop. There should be someone that's able to verify the information, especially if you're someone that's not knowledgeable in that specific thing that you asked AI to do. You need someone who is. Either that, or it's gonna be written at such a high level that maybe it has no value. Exactly, exactly. All right, how about we leave... are you OK leaving it there? Oh, you feel like we covered this? I think we did. OK. All right. All right. Gozi Egbuonu. Gozi Egbuonu, director of programs at the Technology Association of Grantmakers. Gozi, thank you very much for sharing all that. Thank you for having me, Tony. My pleasure. And thank you for being with Tony Martignetti Nonprofit Radio coverage of 25 NTC, where we are sponsored by Heller Consulting, technology services for nonprofits. Next week: two 25 NTC conversations to help your fundraising events. If you missed any part of this week's show, I beseech you, find it at TonyMartignetti.com. And now that Donorbox is gone, I miss our alliteration: fast, flexible, friendly fundraising forms. Uh, I miss that. All right, well, I am grateful to Donorbox, though, for 2 years of sponsorship. Very grateful, grateful. There's another gratitude: I'm grateful to Donorbox. Now that they're not a sponsor anymore, I'm grateful to them? No, I, I've been grateful, I just haven't said it. OK. Our creative producer is Claire Meyerhoff. I'm your associate producer, Kate Martignetti. The show's social media is by Susan Chavez. Mark Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with us next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.