
Nonprofit Radio for June 5, 2023: Artificial Intelligence For Nonprofits

 

Afua Bruce, Allison Fine, Beth Kanter & George Weiner: Artificial Intelligence For Nonprofits

We take a break from our #23NTC coverage, as an esteemed, tech-savvy panel considers the opportunities, downsides, potential risks, and leadership responsibilities around the use of artificial intelligence by nonprofits. They’re Afua Bruce (ANB Advisory Group LLC); Allison Fine (every.org); Beth Kanter (BethKanter.org); and George Weiner (Whole Whale).

Listen to the podcast

Get Nonprofit Radio insider alerts!

I love our sponsor!

Donorbox: Powerful fundraising features made refreshingly easy.

 


 

 

 

We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners

Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.

Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.
View Full Transcript

Transcript for 643_tony_martignetti_nonprofit_radio_20230605.mp3


[00:04:19.33] spk_0:
And welcome to Tony Martignetti Nonprofit Radio. Big nonprofit ideas for the other 95%. I'm your aptly named host of your favorite abdominal podcast. Oh, I'm glad you're with me, but you'd get slapped with a diagnosis of algorithmophobia if you said you feared listening to this week's show, Artificial Intelligence for Nonprofits. We take a break from our 23NTC coverage as an esteemed, tech-savvy panel considers the opportunities, downsides, potential risks and leadership responsibilities around the use of artificial intelligence by nonprofits. They're Afua Bruce at ANB Advisory Group LLC; Allison Fine at every dot org; Beth Kanter at BethKanter dot org; and George Weiner at Whole Whale. On Tony's Take Two, a Givebutter webinar. We're sponsored by Donorbox, with intuitive fundraising software from Donorbox. Your donors give four times faster, helping you help others. Donorbox dot org. Here is Artificial Intelligence for Nonprofits. In November 2022, ChatGPT was released by the company OpenAI. Their more powerful, maybe smarter GPT-4 was released just four months later, in March this year. The technology is moving fast, and there are lots of other platforms, like Microsoft's Azure AI. I guess the sky's the limit. There's Google's Help Me Write, and DALL-E, also by OpenAI, creates images. So artificial intelligence can chat and converse, answer questions, do search, draw and illustrate, and write. There are also apps that compose music, create video and code in computer languages. A team at UT Austin claims their AI can translate brain activity into words, that is, read minds. And I'm probably leaving things out. What's in it for nonprofits? What are we risking? Where are we headed? These are the questions for our esteemed panel. Afua Bruce is a leading public interest technologist who works at the intersection of technology, policy and society.
She's principal of ANB, alpha November bravo, Advisory Group LLC, a consulting firm that supports organizations developing, implementing or funding responsible data and technology. She's on Twitter at underscore Bruce. Allison Fine is a force in the realm of technology for social good. As president of every dot org, she heads a movement of generosity and philanthropy that ignites a profound transformation in communities. You'll find Allison Fine on LinkedIn. Beth Kanter is a recognized thought leader and trainer in digital transformation and well-being in the nonprofit workplace. She was named one of the most influential women in technology by Fast Company and is a recipient of the NTEN Lifetime Achievement Award. She's at BethKanter dot org. George Weiner is CEO of Whole Whale, a social impact digital agency. The company is at wholewhale dot com, and George is on LinkedIn. Welcome, all our esteemed panelists. Thanks. Welcome to Nonprofit Radio. We're gonna start just big picture. Afua, I'd like to start with you: just, what are you thinking about artificial intelligence?

[00:05:30.10] spk_1:
That is a very big picture question. What am I thinking about artificial intelligence? I think there are lots of things to consider. I think first is all of the hype, right? We have heard article after article, whether or not we wanted to, I'm sure, about the promises and the potential of ChatGPT specifically, and generative AI more broadly. Or you think about some of the image-based generative AI solutions that are out there that have been in the headlines recently. Of course, I'm someone who started their career off as a software engineer, where AI has been around for a while. And so, sure, generative AI is a different type of application of AI, but it is building on something that has been out in the world, developed, for a while. Pre-ChatGPT, most organizations, or several companies, just embedded AI into the tools you already use, whether it's Grammarly or something else embedding AI into their solutions. So what I'm thinking about now is: how do we help organizations navigate through all of the hype and figure out what's real, what's not real; recognize where they should lean in; recognize where they can take a pause before leaning in; and then, of course, underlying it all, how do we think about access, how do we think about equity, and how do we think about how embracing AI will change or evolve jobs?

[00:05:59.52] spk_0:
Afua, please just define generative AI for us, so everybody knows what we're referring to and we're all on the same platform.

[00:06:08.78] spk_1:
Sure. So, generative AI is where it is essentially looking at a large model. ChatGPT specifically uses a large language model, so lots of text, and looks at that and then gives you what is statistically sort of the next most reasonable or probable word, based on a prompt that you give it. So it's developing the recommendations as you go along.
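[Editor's note] Afua's description, that a large language model keeps emitting the statistically most probable next word given what came before, can be sketched with a toy model. The word table below is invented purely for illustration; a real model does the same thing over billions of learned probabilities.

```python
# Toy next-word model: made-up probabilities standing in for what a
# large language model learns from vast amounts of text.
NEXT_WORD_PROBS = {
    "thank": {"you": 0.9, "goodness": 0.1},
    "you": {"for": 0.7, "so": 0.3},
    "for": {"your": 0.8, "the": 0.2},
}

def next_word(prompt_word: str) -> str:
    """Return the statistically most probable next word, if any."""
    candidates = NEXT_WORD_PROBS.get(prompt_word, {})
    if not candidates:
        return ""
    return max(candidates, key=candidates.get)

def generate(start: str, length: int = 3) -> list[str]:
    """Repeatedly append the most probable next word, as Afua describes."""
    words = [start]
    for _ in range(length):
        nxt = next_word(words[-1])
        if not nxt:
            break
        words.append(nxt)
    return words
```

Starting from "thank", the sketch walks the table one most-probable word at a time, which is the whole mechanism, just at a vastly smaller scale.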

[00:06:35.79] spk_0:
Allison, please. Yes, big picture.

[00:08:08.00] spk_2:
Well, Afua just said it beautifully, that this isn't a brand-new idea, although we are in the next chapter in terms of advanced digital technology. I think organizations, tony, need to get their arms around this right now, AI, before AI gets its arms around them and their organizations. Beth and I started to look at AI about five years ago, with support from the Gates Foundation, and the promise of it is that AI can eat up the rote tasks that are sucking the lifeblood out of so many nonprofit staffers. They are drowning in administrative tasks and functions and requirements that AI can do very well. In fundraising, it might be researching prospects; taking the first cut at communications with donors, not sending it out, just taking the first cut; helping with workflow; helping with coordination. And the responsibility is for organizational leaders, not line people and not tech people, but organizational leaders, to figure out where the sweet spot is, what we call co-botting, between what humans can do and need to do, connect with people, solve problems, build relationships, and what we want the tech to do, mainly rote tasks right now. So understanding AI well enough, tony, to figure out where it can solve what we call exquisite pain points, and how to make that balance between humans and the technology, is the foremost task for organizations right now.

[00:08:32.35] spk_0:
Beth.

[00:10:18.39] spk_3:
Great. So Allison and Afua said it so well, so I'm just going to build on it, but go into a specific area that is kind of the intersection between AI and workplace well-being, and kind of the question: will AI fix our work? Can it transform the work experience from being exhausting and overwhelming to one that brings more joy, that allows us to get things done efficiently but also to free up space to dream and to plan? Or is it going to be a dystopian future? I don't think so. And by dystopian, related to jobs, I'm talking about, you know, will it get rid of our jobs, who will lose out. Just a week or two ago, the World Economic Forum released a report that predicts that nearly 25% of all jobs will change because of generative AI, and that it'll have a pronounced impact by displacing and automating many job functions that involve writing, communicating and coordinating, which are the things that ChatGPT can do so much better than previous models. But it will also create the need for new jobs, right? I heard a new job description recently: a prompt engineer, somebody who knows how to ask the types of questions of ChatGPT to get the right and most accurate and high-quality responses. And building on what Allison said about co-botting, I think this is the future, where AI and humans are complementary, they're not in conflict, and it really provides a leadership opportunity to redesign our jobs and to rethink and reengineer workflows, so that we enable people to focus on the parts of the work that humans are particularly well suited for, like relationship building, decision making, empathy, creativity and problem solving. And again, letting the machines do what they do best, but always having the humans be in charge. And again, that's why Allison and I always talk about this as a leadership issue, not a technical problem.

[00:10:50.46] spk_0:
Leadership, right. Okay, we'll get to the leadership responsibilities. George, what are you thinking about AI?

[00:11:30.47] spk_4:
Hard to add to such a complete start here. But I would say that just because this is a fad doesn't mean that it's not also a foundational shift in the way we're gonna need to do work and how leaders are gonna have to respond. I also just want to say, like, right now, if you're listening to this podcast because your boss forwarded it to you saying, we gotta get on this, I understand the stress you're under. It is really tough, I think, right now to be in the operational layer of a nonprofit, doing today's work, expected to make tomorrow's change. So stick with us. We appreciate you listening.

[00:12:03.93] spk_0:
Thank you, George. Like auditioning for the co-host role, which doesn't exist, so careful, careful. Watch your step. Let's stay with you, George. You and I have chatted a lot about this on LinkedIn. Use cases: what are you seeing your clients doing with AI, or what are you advising that they explore, as they're also managing the stresses that you just mentioned?

[00:13:00.00] spk_4:
Well, right now we're actively custom-building AIs based on the data, voice and purpose of organizations that we work with. One of the concerns that I have is that when you wander onto a blank-slate tool, like OpenAI, Anthropic, Bard, you name it, you're getting the generic average, as Afua pointed out, the generic average of that large language model, which means you're going to come off being generic. And so we're a little concerned about that, and are trying to focus our weight on how you tune your prompt engineering toward the purpose of the organization. We've already mentioned grant writing, reporting, applications, emails, appeals, customization, social posts, blog posts, editing. It is all there, if you're using it the right way, I think.
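[Editor's note] One common way to do the kind of tuning George describes, grounding a blank-slate model in an organization's own voice and data, is to prepend a system message in the chat-completion style that most of these tools use. The sketch below shows only the shape of that message list; the organization details and wording are illustrative assumptions, not any specific vendor's API.

```python
def build_messages(org_voice: str, org_facts: list[str], user_request: str) -> list[dict]:
    """Assemble a chat-style message list that grounds a generic model
    in one organization's voice and data (the opposite of the blank slate)."""
    system = (
        f"You write for a nonprofit. Voice and purpose: {org_voice} "
        "Ground every answer in the facts below; do not invent statistics.\n"
        + "\n".join(f"- {fact}" for fact in org_facts)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_request},
    ]
```

The same system message can then be reused across every draft the organization generates, so the tuning is done once rather than rewritten by each staffer.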

[00:13:22.32] spk_0:
And that gets to the idea of the prompt engineer that Beth mentioned. You're avoiding that generic average with sophisticated prompts, George?

[00:13:47.96] spk_4:
Absolutely. Yeah. I mean, we jokingly call it the gray jacket problem, where I showed up to a conference and I was wearing the same gray jacket as another presenter. We both walked into a store, and we both thought that the beautiful gray jacket we put on was unique, and that we would be seen as such for picking out such a great jacket. When in fact, when you go into a generic store and get a generic thing, you get a generic output. And my concern is that, without that leadership presence saying, hey, here's how we should be using this, with our brand tone, voice and purpose, it'll be every single new hire out of college. We're rerunning the social media game. Beth has already played this game, Allison, we've already played this game, where we handed the intern the Twitter account because they used it in college. We're gonna just replay that again, and I'd rather just skip that chapter.

[00:14:22.42] spk_0:
And we're going to get into this too: that generic average also has biases and misinformation. False, well, they're not false, false information. Afua, how about you? What are you seeing your clients do? What are you advising, usage-wise?

[00:16:24.89] spk_1:
A couple of things. So, first, I think Allison touched on this as well: you can sort of take a breath. You don't have to embrace everything, all the time, for everything. I know it can seem right now that everyone's talking about generative AI and how it's going to change your world. But you can sort of take a breath, because, as I think Allison and Beth both mentioned, the technology is only good if it's working for our mission, if it's working for organizations. So really taking the time as a leadership team to be clear on what you want to do, what differentiates your organization, and making sure your staff is all aligned on that, is the first thing I advise organizations to do. The second is to think about the use of AI both to help your organization function and deliver its services out in the world, but then also to think about how it impacts your staff. So I think sometimes we can get caught up in, we're going to use AI, I hear it's going to, like, you know, we'll be able to fix all of our external messaging, we'll be able to produce more reports, we'll be able to produce more grant applications. All good, all valid. But remember also, your staff has to learn how to use it, and staff has to learn how to make the prompts. Your staff also has work internally that they are doing that perhaps AI could be used to help speed up, to free up their time and their brain space to lean into what humans do best, which is some of the relationships and having empathy. So think not just about how AI can help you maybe generate more culturally appropriate images for different campaigns around the world, or how generative AI can help you fine-tune some messaging, or how generative AI can help you better segment and deliver services to the communities that you serve.
But also how you can use AI to do things like help with notes, help with creating agendas, help with transcripts and more. What are some of the internal things to really support your staff that you can apply AI towards?

[00:16:48.76] spk_0:
Allison, that's leading right to some of those rote tasks that you mentioned. So I'm gonna put it to you in terms of what Kirsten Hill on LinkedIn asked: what's the best way for a busy nonprofit leader to use AI to maximize their limited time?

[00:18:49.78] spk_2:
So people are looking for some magic solution here, tony, and we hate to disappoint them, but AI is not magic fairy dust to be sprinkled all over the organization. This is a profound shift in how work is done. It is not a new word processing software. AI is going to be doing tasks that only people could do until just now, right? Any other year going back, people would have had to be screening resumes, or writing those first drafts, or coordinating internally. And now, basically, the bots are capable of doing it. But just because they're capable of doing it doesn't mean that you should, you know, unleash the bots on your organization. Our friend Nick Hamlin at GlobalGiving, a data scientist, said AI is hot sauce, not ketchup: a little bit goes a long way. Beth and I have been cautioning people to step very slowly and carefully into this space, because you are affecting your people internally and your people externally, right? If a social service agency has always had somebody answering questions of, when are we open, and what am I eligible for, and when can I see somebody, and now a chatbot is doing that, tony, you have to be really careful that, one, the chatbot is doing its job well, and, two, that the people outside don't feel so distant from that organization that it's not the same place anymore. So our recommendation is, that's

[00:18:52.67] spk_0:
a, that's a potential, I mean, mishandled, I guess, this could change the culture of an

[00:19:36.78] spk_2:
organization. Absolutely. If you are on the outside and you're accustomed to talking to Sally, who's at the front desk, and all of a sudden the organization says to you, your first step has to be talking to this chatbot online instead, the organization has solved perhaps a staff issue of having to answer all these questions all at the same time, but it's made the interaction with those clients and constituents much worse. So we need to first identify what is the pain point we're trying to solve with AI, is AI the best solution for doing that, and then to step carefully, and keep asking both staff and constituents, how is this making you feel, right? Do you still feel like you have agency here? Do you still feel like you are connected to people, internally and externally? And to grow it from there. There is no rush to introduce AI in everything that you do all at once. There is a rush to understand what the impact of automation is on your organization.

[00:21:00.42] spk_0:
It's time for a break. Stop the drop with Donorbox. Over 50,000 nonprofits in 96 countries use their online donation platform. Naturally, it's four times faster, easy payment processing. There are no setup fees, no monthly fees, there's no contract. How many of your potential donors drop off before they finish making the donation on your website? Stop the drop, stop that drop, with Donorbox, helping you help others. Donorbox dot org. Now back to Artificial Intelligence for Nonprofits, with Afua Bruce, Allison Fine, Beth Kanter and George Weiner. Beth, I see you taking copious notes. I think there's a lot you want to add.

[00:23:39.85] spk_3:
Oh, there were so many good points made, and I was taking a lot of notes because there was nowhere to jump in. So, a couple of things. George said, we did the social media thing and we turned it over to the intern, let's not do that again. But I'm not sure that's gonna happen, because with social media adoption, if we think back, you know, the dawn of social media started in 2003, and it really wasn't until six or seven years later, and I remember it quite distinctly, when the Chronicle of Philanthropy and organizations were really embracing it. There was a lot of skepticism, because social media adoption was more of a personal thing: it started with the individual; it wasn't immediately brought into the workplace. And I think ChatGPT will be a little bit different, because the benefit there is, you know, the sort of allure of efficiency, saving time, right? And, or, it can help us raise more money. So I think we might see it develop more quickly in the workplace, and if nonprofit leaders do smart adoption, then there will also be the training required, and the retraining and the re-skilling. And I think, for me, the most important thing about this is that it is going to change the nature of our work, and that if you just let that happen, you're missing an opportunity, because we have a chance to really accelerate workplace learning, both formal and informal, to re-skill staff in a way to embrace this that's not going to cause more stress and burnout. The other thing I was thinking about is the gray jacket, and I love that metaphor, George, I love it. You know, if nonprofits are turning to and buying the $20-a-month subscription for ChatGPT, they're getting the gray jacket version and missing out on the opportunity to really train it. On the other hand, if they're just going without an organizational strategy, are they being trained? Are they entering confidential information into ChatGPT?
Are they using their critical thinking skills? Because we know that ChatGPT can hallucinate and pick up crap, right? Are they really, you know, are they doing that? Like, are they just saying, write me a thank-you letter for this donor, versus, write me a thank-you note in a conversational tone that recognizes this donor quality, blah, blah, blah, right? And then go back and forth and refine a draft. So there's a piece of, I guess, technical literacy that has to be learned, and that's like the technical problem. But then there's also this whole workplace learning, and workplace, you know, reengineering of jobs, and bringing in new jobs and different parts of descriptions, that also needs to take place as well. So we've got to prepare the organization's culture to adopt this in a way that is ethical and responsible.
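[Editor's note] Beth's contrast between a bare prompt and a refined one can be sketched as two small prompt builders. The donor details, tone and word limit below are illustrative assumptions for the sketch, not any organization's actual template.

```python
def bare_prompt(donor_name: str) -> str:
    # The "gray jacket" version: generic in, generic out.
    return f"Write me a thank-you letter for donor {donor_name}."

def refined_prompt(donor_name: str, gift: str, tone: str, quality: str) -> str:
    """Fold donor specifics and organizational voice into the request,
    the way Beth describes refining a draft back and forth."""
    return (
        f"Write a thank-you note to {donor_name} in a {tone} tone. "
        f"Mention their gift of {gift} and recognize their {quality}. "
        "Keep it under 150 words and in our organization's voice."
    )
```

The refined version is what a prompt engineer iterates on: each detail it carries is one fewer generic choice the model makes on its own.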

[00:24:07.24] spk_0:
George, you feel any better?

[00:25:12.72] spk_4:
I'm not sure how I felt to begin with, but the piece to add on as a nuance there is not just the generic output: the normalization, and the ability for people to identify AI-created content, is going to explode. What does that mean? If I were to show you a stock photo right now versus one I took on my phone, it would take you 0.5 seconds to be like, yep, stock photo, stock photo, stock photo. And we have all seen the appeals that go out with generic happy family with sunset in the background. And I think what's going to happen with the text that is generated by folks that are using gray-jacket GPTs is that your audience is going to see it, identify it and shut it down mentally. It's like driving past that billboard or that banner ad. It's going to be a wash. It may seem unique to you, but I think that's another thing that we're going to see happen.

[00:25:13.82] spk_0:
I just want folks to know that the gray jacket is a real story. You and another guy did show up with the exact same jacket

[00:25:21.64] spk_3:
at some point. An NTEN conference, wasn't it, in New Orleans?

[00:25:24.91] spk_4:
It was, it was a fundraising conference. And actually, the other guy's name was George. So there were two Georges, two gray jackets. I felt very, um, silly.

[00:25:38.76] spk_0:
Yeah.

[00:26:29.31] spk_2:
So, the ultimate ROI, Beth and I feel, and we wrote about in The Smart Nonprofit, is what we call the dividend of time: that is, to use AI to do those rote tasks that I talked about a few minutes ago, in order to free up people to do human things. And, George, the opportunity isn't, we hope, to send out more messages, or to, you know, continue down the transactional fundraising path. The opportunity is to use your time to get to know people, and to tell them stories, and to listen to them. So, with or without AI, organizations are stuck in that transactional hamster wheel, tony, for raising money, and if they can't get out of that, AI is definitely not going to help them. The opportunity here is to move that entire model into the past and say, we're going to create a future where AI gives us the time and space to be deeply relational with people. That's the opportunity.

[00:27:17.67] spk_0:
Well, I'm gonna come to you in a moment and talk about how we can prevent this generic average, this gray jacket, from taking over our culture. But Allison, I just want to remind you that when I had you and Beth on the show to talk about your book, The Smart Nonprofit, I pushed back on the dividend of time, because it feels like the same promise that technology has given us through the decades. And I'm not feeling any more time available now than I did before I had my smartphone, or whatever other technology I've adopted that was supposed to have yielded me great time. I don't feel any greater time.

[00:28:42.12] spk_2:
I don't believe that that was the promise before. And certainly what we found with the last generation of digital tech, tony, is that it made us always on, and everything became very loud and very immediate. No question about it. And this next chapter in AI is not guaranteed to give us time. What we're saying is, there's an opportunity to work differently and to create this time, if leaders know how to use it well. That's the big if. If we're just going to sit back and say, let's let AI supersize our transactional fundraising and send everybody 700 messages a day, because that's worked so well (said very sarcastically), then no, it is not going to free up any time. But what we are saying is, this technology has the capacity to do all of that work that is sucking up 30 to 40% of our time a day, and we could be freed up. But only if we use it smartly and strategically.

[00:28:51.05] spk_0:
Afua, how about, you know, how we can help prevent these generic averages, with their biases and marginalization of already marginalized voices? And, just apart from the fear of it taking over the institution's culture, what are the methods to prevent that?

[00:33:20.42] spk_1:
Um, sorry. I think I would start with an analogy that I've used before: that technology is not a naturally occurring resource. There's no, like, river of technology that we just walk down to and scoop up, and now we have technology and it immediately nourishes us. To some of what Allison was just mentioning: in order to actually use AI effectively, it takes intentional management. It takes intentional decisions about how to use it, when to use it and why to use it. And so that definitely applies when we think about how we differentiate ourselves even as we use AI, and also how we make sure that we then are being intentionally inclusive. I don't know of any technology that just by happenstance has been inclusive. And so it requires intentional decisions. So some ways that bias can appear in generative AI systems are with some of the coding that is done, and inherently with some of the data sets that are used. Even with large language models, they reflect, right now, everything on the internet. I know a lot of great people on the internet; there's a lot on the internet that does not align with my values, or even my actual lived experience. And so how do we then think about sort of combating that? So I think, one, we've already touched on prompt engineering, to make sure that we are asking it the things that we want to get back. If you ask ChatGPT, for example, to describe what are risks with generative AI, it will give you one list. If you refine that prompt to ask specifically what are risks with generative AI including, or specifically affecting, women or people of color, it will give you a more refined response. ChatGPT, a month ago: if you asked it, the doctor and nurse were arguing because he was late, who was late? It would tell you the doctor was late. 
If you asked the same question but said, because she was late, it would tell you it was the nurse that was late. That now has changed, because the people who are programming ChatGPT have manually made those changes. So as we think about how we can use it, it is through some of the software that we're building on top of it, some of the plug-ins that you decide to take advantage of or not take advantage of, how you might be able to use it on your own sort of proprietary information, with the right parameters in place to keep it with your own data, in ways that make sense for your organization. I think it's an opportunity for funders to fund the creation of new data sets, or fund the creation of more responsible plug-ins, or fund, you know, new open-source developments as well. So I think that's an exciting play there. And then I think also there is an opportunity to use, sorry, generative AI in ways that really do enable more representation. Working with someone who is an advocate for women's rights in India, we're talking through ways that she could more quickly generate posters and informational materials, using generative AI for both images and text, for different places on the subcontinent that she couldn't physically get to, and where she didn't have talent on the ground. That is different, though, I'll say, from the announcement from Levi's a couple of months ago that they were going to use generative AI to create a diversity of models, rather than hiring people, or BuzzFeed recently saying, at a shareholders meeting, that they would use AI to help create authentic Black and Latino voices, presumably instead of talking to actual, authentic Black and Latino people. They did issue a statement a day or two later saying, no, no, no, that's not what we meant, we meant something else. 
But my point is, there are ways to think about how you can use generative AI as a nonprofit organization to better reach and connect, but also make sure that you are still doing it in a way, as I think all of us have said so far, that really does center people, that does center communities, and isn't trying to necessarily replace those relationships.

[00:34:11.43] spk_0:
Beth, our master trainer: I see a need for training, for leaders, for users. I mean, I'm not seeing any of this happening now, I'm not seeing how-tos, you know. But is there a training issue here, for people at all levels? You're, sorry,

[00:35:55.78] spk_3:
Sorry about that. Absolutely, yes. But I make a distinction between training and learning, all right? So training, professional development: formal ways of learning particular skills, and those might be more around the technology literacy skills, like, you know, prompt engineering, for example. But then there's also the informal piece of learning, which is, informally, discussions with different teams about how it's changed their jobs, right? Or reflecting on a job description, or a job workflow, that needs to be changed, and then sharing that with other departments. So, you know, there's kind of workplace learning that is connected with the workplace culture, and which in some ways has nothing to do with the technology; it's kind of a result of the technology. What do we now have the possibility to do, because we have this freed-up time, or because we have not spent so much time staring at a blank screen and not doing anything because of blank-screen syndrome? You know, ChatGPT has, like, helped us get to that first draft quicker, and maybe human editing has done the second and third drafts, and we've gotten a better result. And that has improved our end results with our fundraising goals, or whatever we're trying to accomplish. You know, what comes next? So those are the pieces of learning that, you know, haven't been possible a lot of times in nonprofits, because we're so busy trying to get the stuff done on our to-do list, or we're being overwhelmed. So what is possible, now that we're able to do our jobs better and we're able to take on these different tasks? How can we improve our results and outcomes?

[00:36:24.68] spk_0:
George, how are you teaching your clients, who are hopefully translating that into learning, about using generative AI? Are you talking directly to leaders? Are you training users on skills like better prompting? What does teaching and training look like for you?

[00:38:14.82] spk_4:
I mean, we’ve done our best to put out as much free content as possible, first and foremost, to try to, you know, raise the tide of understanding for nonprofits and we’re putting all of that out as fast as I can think to create it internally. We’re having weekly training sessions on use cases for us and we’re actively building and improving on client custom created GPT uh endpoints that pull their data in and their purpose in. I want to go back though to Beth talking about what actually, you know, education and this looks like and we could train you on how to swim over this podcast. We could talk about all the things you need to do. Like I’m watching my daughter learn to swim. There’s no storybook, there’s no encyclopedia, there’s no webinar that you could watch that would teach you how to swim. There is a fundamental component of this. If you jumping in the water and interacting with the tool learning, coming back, realizing where it frankly lies to you. As I am really happy, we have all pointed out where it hallucinates where it’s helpful and where the opportunities are. And by the way that’s gonna change next month and so it’s not a single point in time and, you know, this, you, you’ve been an engineer for, you know, a while and seen it’s like the, you know, the code you played with, you know, a month ago, it’s just different tomorrow and what’s possible is different tomorrow. Um On the other side of the coin, I’m a little concerned, you know, we have gone through and maybe you’re getting anxiety when you hear yet another tool. Yet another tool. There’s over 1600 tools listed on just one site, future tools dot IO. And there’s going to be even more tomorrow. There are 95% of these things that are just going to be gone within a year. So I’m also cognizant of the rabbit holing that can happen in this.

[00:41:48.75] spk_0:
It's time for Tony's Take Two. I'm doing a Give Butter webinar later this month, Debunk the Top Five Myths of Planned Giving. I am especially excited about this one because the Give Butter host, Floyd Jones, and I are gonna be together, co-located, face to face, person to person, in person, real time. So with the energy that he brings, and I try to keep things light and moving, I think we're gonna have quite a bit of infotainment on this one with Give Butter. Debunk the Top Five Myths of Planned Giving, and it's Wednesday, June 14th at 2 p.m. Eastern time. But you don't need to be there, you can get the recording if you can't make it live. Watch archive. I used to say that on the show, listen live or archive. Now it's just listen archive, no more live, but this one is listen live or archive, bona fide. If you want to make a reservation, you go to give butter dot com, then resources, and then events. Very simple. So make the reservation. If you can join us live, that would be fun, because I love to shout folks out and I'll answer your questions. If you can't, sign up and watch the video. It's all at give butter dot com, resources, and then events. That is Tony's Take Two. We've got boo koo but loads more time for artificial intelligence for nonprofits.
I'd like to turn to some of the downsides even more explicitly. We're all talking about efficiency and the time saved, the dividend of time, but at what cost, what potential cost, short term, long term? We've already talked about there being a bias toward existing dominant voices remaining dominant. Afua had a great example of someone in India, right, trying to represent folks that she can't get to see.
So there's a potential upside, but all this at what potential cost? And then there's, we haven't even mentioned... we mentioned false information, but in the video realm, deepfakes: video and audio deepfakes, photograph deepfakes. Who wants to... I'm being egalitarian there. Who wants to launch us into the risks and downsides part of the conversation?

[00:41:54.45] spk_1:
I'm happy to start. I'll say for the record, I am generally an optimist. However, there, there

[00:42:02.41] spk_0:
are some things... we've taken judicial notice.

[00:44:17.34] spk_1:
Thank you. Thank you, for the record. It has been noted, I appreciate that. So again, just reiterating what we've already said: intentionality really matters here. Without intentionality, things can go really wrong, because generative AI has the ability to hallucinate, and because generative AI is reacting to what data already exists, recognize that sometimes the decisions that we make based on that could be really wrong. So think through and imagine how AI might be used to help with hiring processes, even with a more standard version of AI. For example, Amazon a few years ago put some work into developing a system that would identify people who were best poised to be managers and succeed in senior management at Amazon. The results of the AI showed that white men from particular schools were best poised. Is this actually true based on skills? No, but it was based on the data that they had, which was trained on their internal data, which, Amazon being the company it is in the Northwest, just reflected what their practices had been. Amazon ended up not rolling that out, because they had a human in the loop who looked at what was coming out, reviewed it, and determined this is not actually in line with our values, it's not in line with what we're trying to do. So I think pushes to completely remove a human from that decision-making loop are ways that generative AI can go really wrong very quickly in organizations. I think we've already started to talk about some of the bias that can appear in results. I gave the example already with gender; that is true along a number of other demographics as well. And so not correcting for that, or not recognizing that even with these large language models, even with something that's trained on the internet, not everyone is represented there,
and so making a lot of decisions based on what's there may not give you the most inclusive and equitable response that you want. I think those are two ways that this can go wrong.

[00:44:33.58] spk_0:
Allison, anything you want to add to this? Sure.

[00:45:47.94] spk_2:
So the AI revolution is far bigger than ChatGPT and generative AI. AI is going to be built into every software product that an organization buys: in finance, in HR, in customer service, in development. Those products were created by programmers who are generally white men, and then trained on historic data sets, which, as you just mentioned, are deeply biased as well. So you have a double whammy: by the time the product gets to an organization, it has gender and racial bias baked right into it. This, again, is why it's a leadership problem, tony. We need organizations to know what to ask about these products: to ask how it was built, what assumptions were made in building it, how it was tested for bias, how you can test for bias, before that HR software program you just threw into your mix is screening out all of the black and brown people applying for these positions. So these are real, everyday concerns about integrating AI into work, and why we need to be careful and strategic and thoughtful about how we're integrating it into organizations.

[00:47:32.67] spk_3:
Yeah, I really want to pick up on a point that Afua made about the concern about not having human oversight at all times. One of my favorite examples of this comes from Kentucky Fried Chicken in Germany. They were using a generative AI tool that could develop different promotions that they could put out there. And the data set that it was using was the calendar of holidays in Germany and, of course, some promotional language like 5% off cheesy chicken, right? And they got into trouble because there was a lot of social media messaging that was just put out there, generated by the generative AI, and one message went out on November 9th, which is the anniversary of Kristallnacht, which is considered the beginning of the Holocaust. And the promotion was, you know, enjoy $5 off a cheesy chicken to celebrate the night of broken glass. And so I think the issue is that we begin to put so much trust into these tools that we think of them as human, or the equivalent of human intelligence, and we just take them at face value, and we don't have that human intervention with those critical thinking skills. And that's where harm could be done to the end users. So I really think it comes back to that co-botting example that we've talked about, and again, the need for leaders to really be reflective and strategic in how it's executed. It's not just about learning the right prompts to give ChatGPT to get a particular output.

[00:48:10.15] spk_0:
There was another example of that, I think it was at a college. They put out a press release, and at the bottom of the email it said, you know, generated by ChatGPT or something. I mean, you've all talked about humans being involved with the technology, and a human hadn't even scanned it to know to take that credit line off the email. So, you know, blind usage.

[00:48:58.01] spk_3:
That's an interesting thing to think about. Like, do I disclose? If I was writing a post, an article, and I went to ChatGPT because I needed to get it from 1,000 words to 750 words, I could ask it, you know, TL;DR: stand by for some text, please reduce from 1,000 words to 750 words, which I actually have used. But I don't just cut and paste. I actually sat and compared how it changed the language, and one thing I did notice is it took out any sentences that had a lot of personality to them and transformed it into this very generic kind of text, you know. So again, it requires human editorial oversight, if you will.

[00:49:20.80] spk_0:
George, you want to talk about risks, downsides?

[00:50:17.62] spk_4:
Yeah, I would say this is more of a bigger-picture risk that I see as the net result of, as we're talking about, GPT tools being built into everything we use. One is that if you were using it blindly, you were the product, you're handing over information. There was an actual OpenAI hack, well, a hack or data leak, where conversations that were being stored on the site were accidentally shared in the open. And so I think that's something to be aware of. Bigger picture, I am watching very closely the impacts of chat-first search. Chat-first search, Bard and Bing, and Bard is Google's AI that has now rolled out of private into a public beta, is going to destroy organic traffic for information-based searches to nonprofits inside of what I believe is the next two years. The second-order effects of that are so many that we would need several podcasts to understand, but I'm no longer telling clients that we should expect more organic traffic next year versus this year.

[00:50:57.37] spk_0:
You experienced this with your own, with the Whole Whale site. You did a search, and the search tool gave you back some of your Whole Whale content. It did credit it, but then your concern was that that credit was purely optional. Right? You experienced this with your own intellectual property.

[00:52:14.75] spk_4:
I'm watching it across a lot of, you know, we get roughly 80,000 monthly users looking for information that we put out there. I test what that looks like when I do similar searches on Bing, as well as Perplexity dot AI, and now Bard. The thing that scared me the most is that Bard just sort of decided not to even bother with the footnotes in its current iteration and just gave the answer to one of several articles that drive significant traffic to our site. There are two types of traffic that SEO is providing: informational and then transactional. And so for the informational, I would encourage your organization to do some of these sample searches and begin to plan accordingly. And it makes me a little sad that that part of nonprofits' ability to be a part of the conversation, when somebody's asking for, I don't know, information about PrEP and HIV, or something about LGBTQ rights history, doesn't get you engaged with the organization. It just gives you the answer, and there's something missing there that I think is going to have negative downstream impacts for social impact organizations. And

[00:52:22.87] spk_0:
you expect to see declines there?

[00:52:38.37] spk_4:
There will be a decline, significant declines. And that's concerning to me because it's cutting nonprofits out of the conversation that they have traditionally been a part of when people are looking for information, and especially in a time when we're going to have a rapid increase in disinformation, because these tools can be used to create that at scale.

[00:54:19.95] spk_0:
We already have enormous disinformation. It's hard to imagine it growing exponentially or logarithmically. I'm interested in what you all think about my concern. Executive summary: that it will make us dumber. My reasoning behind that is that a lot of what we're suggesting, not just us here today, but a lot of what is being suggested, is that generative AI is a good tool for a first draft. Beth, you mentioned blank screen syndrome, but to me, writing that first draft is the most creative act that we do in writing, or in composing, it could be music. And my concern is that if we're ceding that most creative activity away, then we're reducing ourselves to editor or copy editor. Not to minimize the folks who make their living editing and copy editing, but it's not as creative a task for a human as sitting in front of that blank screen, or that empty pad for those of us who maybe start with pen and paper. We're ceding the most creative activity away and reducing our role to editor, which is an easier job than starting from whole cloth. And so I fear that that will make us dumber, reduce our creativity. And I'm saying, you know, generally dumber. You're all being so polite. You could have just jumped

[00:56:12.96] spk_3:
in. Well, I didn't want to just interrupt you, challenge you, but I do want to challenge you. I agree with you, but I also disagree with you. So one thing that I worry about, and it might be science fiction, and I haven't yet seen research on this, but I do know there's this thing called Google brain. You may be familiar with it. You're trying to remember something and you can't remember it, because you haven't exercised your retrieval muscles in your brain, so you go to Google and you start Googling to remember something. It's a thing called Google brain. And there was a study that showed that people who were using Google Maps, or Apple Maps, to navigate, it is making their geospatial skills less robust. And so the recommendation is, you don't want to completely lose your ability to navigate, so you should go back to a paper map. So there's definitely, and there is research around this, when you're doing something in an analog way, if you're writing it down, it encodes in your brain in a different way than if you're typing it. So the thing that I worry about with this is less about it taking our creativity away, because I think if you're trained as a prompt engineer, you could be trained to brainstorm with it, right, in a way that sparks your creativity versus takes it away. But what I'm worried about is how will this affect the human brain, you know, down the road another decade or so? If we're not using our brain skills of encoding information and retrieving information, and it's like a muscle, is that going to make us more at risk for dementia or Alzheimer's down the road? I know it sounds crazy, but that's the thing I worry about.

[00:56:47.28] spk_0:
I don't think it's crazy. That's what I'm concerned about. I'm concerned on a world level that we all collectively will just not be as creative, and I'm calling that being dumber.

[00:57:49.77] spk_1:
I don't think the amount of creativity and innovation is finite, such that if we use tools, we're no longer going to be creative. We have computers now to help us draw, to help us write. We can write on a computer, versus before, when we had to use paper and could only draw with a limited set of tools. When we got computer-aided graphics and more, we just had more different ways to see the world, more different ways to figure out what images we wanted to see and how we wanted to engage. Also, as someone who likes to write a lot, I'd say I'm really grateful for my editors and the fact that their brains work differently than mine does when I start writing. Those skills are complementary. But I say that because I think that we will have to change, we will evolve how we think, what we think about, and how we work. But I think that is a different type of creativity, different types of innovation, rather than us just no longer being creative. Yeah,

[00:57:55.80] spk_0:
I didn't mean eliminate our creativity, but reduce it.

[00:58:10.94] spk_2:
It's important, tony, to stay out of these binary arguments of AI is so bad or AI is so good. It is going to be a mix, as technology always has been. I was just reading a book the other day that talked about the introduction of moving pictures, and how appalled people were that, you know, they could see these images over and over again, right? And how it was going to take away all of people's creativity.

[00:58:23.12] spk_0:
The same thing when silent movies became talkies.

[00:58:36.56] spk_2:
You know, we do this every time. We are changing our brains, I'm not saying that we aren't. However, there is going to be an explosion of creativity, of jobs we haven't thought of yet, of opportunities we haven't thought of, that comes out of this next chapter that we are just beginning now. And I think it's important to go into this with as much information as we can, cautiously, again, but with a sense of excitement and adventure, because something really, really interesting is about to unfold.

[01:00:49.90] spk_3:
And I just want to affirm what Allison just said about this kind of new creativity. It was making me think of, I think it was about a year ago, when DALL-E came out, which is the image generator that works by looking at patterns in pixels of images that are on the internet, and creates something new based on your prompt. And I heard an artist talking about this, like there's this whole debate: are tools like DALL-E, which analyze pixel patterns in images created by real artists, stealing their work without their consent or without their compensation, or is this a creative thinking tool? So I was messing around, and I have a black and white parti Labradoodle. And so I asked it, you know, create an image of a black and white parti Labradoodle surfing a wave in the style of Hokusai. And it generated four images in the style of Hokusai. Some of them were silly, some of them were, oh, this is really interesting, and it prompted me: oh, what would it do if I asked it to do this in the style of Van Gogh, or the style of Monet? And then I started getting all these other ideas about things that I wanted to do, and before I knew it, I had a thousand different images of a black and white parti Labradoodle doing all kinds of things that I wouldn't even have thought of if I hadn't seen the response that it gave me from the first one. But so, is that different than if I just did a brainstorm with myself about what I could draw, if I could draw anything? Or is this aided creativity, much in the way that an artist would go out, you know, and look at landscapes for inspiration?

[01:01:22.10] spk_2:
Yeah. Now, one place we're in a lot of trouble, tony, is the fact that our policy makers are so far behind on AI, right? We're gonna have enormous copyright issues. We have enormous ethical issues coming up of when AI should be used in policing. The Department of Defense is experimenting right now with completely automated lethal drone weapons. Is that really who we want to be, that we have robots killing people without any human oversight on the ground at all, or in some headquarters at all? There are really profound policy issues that we should be talking about right now, and we are way behind on those.

[01:01:51.16] spk_0:
George, you want to comment on the role of government, or push back on me?

[01:02:45.37] spk_4:
The role of government is beyond my pay grade, if I'm honest. You know, I'll stick to my scope. I will say, though, tony, in 2004 podcasting became a thing, a new technology. Before that there were gatekeepers, and I think you've done very well, as far as I know, the longest-running podcast for nonprofits. Like, it opens up new opportunities. There are over two million images created on DALL-E per day, and that was back in October. So I'm willing to bet it has increased the output. And on a personal level, it has increased my output, and I have had a lot of fun building and working with it, as it has unblocked me for the creation of new content. Undeniably, though, the way we use tools then shapes the way we change. And I do agree there is a depth of knowledge potentially lost in being able to simply say, write me an article about this thing, and then I tweak it, as opposed to that being part of learning an approach. And I think academia is really reeling from how to teach this next generation, and I'm curiously watching how they train the next generation of people coming into the workforce.

[01:03:24.54] spk_0:
Well, let me say you're all probably more optimistic. I don't know if I'm skeptical, I'm just concerned, I'm just concerned about the dumbing down of the culture, and the culture meaning the world

[01:03:31.72] spk_2:
culture, you

[01:03:33.67] spk_1:
know,

[01:03:36.64] spk_2:
have you seen our culture? How much dumber?

[01:03:39.30] spk_0:
Yeah, we're starting at a pretty low level. That's how bad I think it could get. Yeah.

[01:05:17.38] spk_1:
I just want to emphasize, I don't think we spent enough time on one of Allison's last points about the copyright issues, the ownership issues. Even as the data economy has exploded, since the age of big data was declared, we have created systems that really extract from certain populations, historically marginalized populations, rather than enable and empower these same populations whose data we then rely on, or I should say corporations do, in general, sometimes, oftentimes, nonprofits as well. And that has just increased at scale with generative AI, with AI more broadly, right? Especially with generative AI and things that scrape the whole internet of things that people put out there: no longer, as George mentioned, attributing sources, no longer pointing to source material, no longer giving credit to people. Same with artists and music and others. I think that is a huge issue, and one from an ethical perspective, especially for a nonprofit whose mission is to empower marginalized communities, if that's a particular nonprofit's mission. It's a big question to consider: how and when should you use generative AI systems that do not attribute information and don't close that loop back to the people who powered the systems?

[01:05:25.25] spk_0:
All right.

[01:05:26.81] spk_1:
I don't know if that's a positive note, but it's a note.

[01:07:14.66] spk_0:
That was more mixed than positive, but great, valuable points. You know, great promise, with potential catches, and the importance of leadership and proper usage and all. All right, thanks to everybody. Afua Bruce, you'll find her on Twitter at afua underscore bruce. She's principal of ANB Advisory Group. Allison Fine, president of every dot org, where there are fires to put out. You find Allison on LinkedIn. Beth Kanter at Beth Kanter dot org, and George Weiner, CEO of Whole Whale, whole whale dot com, and George is on LinkedIn. Thanks, everybody. Thanks very, very much. Next week: what power really sounds like, using your voice to lead, and using your executive skills. If you missed any part of this week's show, I beseech you, find it at tony-martignetti dot com. We're sponsored by Donorbox, with intuitive fundraising software from Donorbox. Your donors give four times faster, helping you help others. Donorbox dot org. Our creative producer is Claire Meyerhoff. The show's social media is by Susan Chavez. Marc Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with me next week for Nonprofit Radio, big nonprofit ideas for the other 95%. Go out and be great.