
Nonprofit Radio for June 5, 2023: Artificial Intelligence For Nonprofits

 

Afua Bruce, Allison Fine, Beth Kanter & George Weiner: Artificial Intelligence For Nonprofits

We take a break from our #23NTC coverage, as an esteemed, tech-savvy panel considers the opportunities, downsides, potential risks, and leadership responsibilities around the use of artificial intelligence by nonprofits. They’re Afua Bruce (ANB Advisory Group LLC); Allison Fine (every.org); Beth Kanter (BethKanter.org); and George Weiner (Whole Whale).

Listen to the podcast

Get Nonprofit Radio insider alerts!

I love our sponsor!

Donorbox: Powerful fundraising features made refreshingly easy.

 


 

 

 

We’re the #1 Podcast for Nonprofits, With 13,000+ Weekly Listeners

Board relations. Fundraising. Volunteer management. Prospect research. Legal compliance. Accounting. Finance. Investments. Donor relations. Public relations. Marketing. Technology. Social media.

Every nonprofit struggles with these issues. Big nonprofits hire experts. The other 95% listen to Tony Martignetti Nonprofit Radio. Trusted experts and leading thinkers join me each week to tackle the tough issues. If you have big dreams but a small budget, you have a home at Tony Martignetti Nonprofit Radio.

Transcript for 643_tony_martignetti_nonprofit_radio_20230605.mp3


[00:04:19.33] spk_0:
And welcome to Tony Martignetti Nonprofit Radio. Big nonprofit ideas for the other 95%. I'm your aptly named host of your favorite abdominal podcast. Oh, I'm glad you're with me, but you'd get slapped with a diagnosis of algorithmophobia if you said you feared listening to this week's show, Artificial Intelligence for Nonprofits. We take a break from our 23NTC coverage as an esteemed, tech-savvy panel considers the opportunities, downsides, potential risks and leadership responsibilities around the use of artificial intelligence by nonprofits. They're Afua Bruce at ANB Advisory Group LLC; Allison Fine at every.org; Beth Kanter, BethKanter.org; and George Weiner at Whole Whale. On Tony's Take Two, a Givebutter webinar. We're sponsored by Donorbox, with intuitive fundraising software from Donorbox. Your donors give four times faster, helping you help others. Donorbox.org. Here is Artificial Intelligence for Nonprofits. In November 2022, ChatGPT was released by the company OpenAI. Their more powerful, maybe smarter GPT-4 was released just four months later, in March this year. The technology is moving fast and there are lots of other platforms, like Microsoft's Azure AI. I guess the sky's the limit. There's Google's Help Me Write, and DALL·E, also by OpenAI, creates images. So artificial intelligence can chat and converse, answer questions, do search, draw and illustrate, and write. There are also apps that compose music, create video and code in computer languages. A team at UT Austin claims their AI can translate brain activity into words, that is, read minds. And I'm probably leaving things out. What's in it for nonprofits? What are we risking? Where are we headed? These are the questions for our esteemed panel. Afua Bruce is a leading public interest technologist who works at the intersection of technology, policy and society.
She's principal of ANB, Alpha November Bravo, Advisory Group LLC, a consulting firm that supports organizations developing, implementing or funding responsible data and technology. She's on Twitter at underscore Afua Bruce. Allison Fine is a force in the realm of technology for social good. As president of every.org, she heads a movement of generosity and philanthropy that ignites a profound transformation in communities. You'll find Allison Fine on LinkedIn. Beth Kanter is a recognized thought leader and trainer in digital transformation and well-being in the nonprofit workplace. She was named one of the most influential women in technology by Fast Company and is a recipient of the NTEN Lifetime Achievement Award. She's at BethKanter.org. George Weiner is CEO of Whole Whale, a social impact digital agency. The company is at wholewhale.com and George is on LinkedIn. Welcome, all our esteemed panelists. Thanks, welcome to Nonprofit Radio. We're gonna start just big picture. Afua, uh I'd like to start with you. Just, what are you thinking about artificial intelligence?

[00:05:30.10] spk_1:
That is a very big picture question. What am I thinking about artificial intelligence? I think um there are lots of things to consider. I think first is um all of the hype, right? We have heard article after article, whether or not we wanted to, I'm sure, about the promises and the potential of ChatGPT specifically, generative AI more broadly. Um Or you think about some of the image-based generative AI solutions that are out there that have been in the headlines recently. Of course, I'm someone who started their career off as a software engineer, where AI has been around for a while. And so sure, generative AI is a different type of application of AI, but it is building on something that has been both out in the world and developed for a while. Pre-ChatGPT, most organizations, or several companies, just embedded AI into the tools you already use, whether it's Grammarly or something else embedding AI into their solutions. So what I'm thinking about now is how do we help organizations navigate through all of the hype and figure out what's real, what's not real, um recognize where they should lean in, recognize where they can take a pause before leaning in, and then of course, underlying it all, how do we think about access, how do we think about equity, and how do we think about how embracing AI will change or evolve jobs?

[00:05:59.52] spk_0:
And Afua, please just define generative AI for us, so everybody knows what we're referring to and we're all on the same platform.

[00:06:08.78] spk_1:
Sure. So, generative AI is where it is essentially looking at a large model. ChatGPT specifically uses a large language model, so lots of text, and looks at that and then gives you what is statistically sort of the next uh most reasonable or probable word, based on a prompt that you give it. So it's developing the recommendations as you go along.
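[Editor's note: Afua's description of next-word prediction can be made concrete with a toy sketch. This is an editor's illustration, not anything from the panel: the tiny corpus, the bigram counting, and the `most_probable_next` helper are all invented for the example, and real systems like ChatGPT use neural networks trained on vastly larger corpora.]

```python
from collections import Counter, defaultdict

# Toy illustration of the principle: a language model looks at lots of
# text and returns the statistically most probable next word.
corpus = (
    "the donor gave a gift . the donor gave again . "
    "the nonprofit thanked the donor ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("donor"))  # "gave" follows "donor" most often here
```

Scaled up from this tiny corpus to most of the internet, and from bigram counts to a neural network, the same "what word probably comes next" idea underlies the tools discussed in this episode.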

[00:06:35.79] spk_0:
Allison, please. Yes, big picture.

[00:08:08.00] spk_2:
Well, Afua just said it beautifully, that this isn't a brand new idea, although we are in the next chapter in terms of advanced digital technology. I think organizations, tony, need to get their arms around AI right now, before AI gets its arms around them and their organizations. Beth and I started to look at AI about five years ago with support from the Gates Foundation, and the promise of it is that AI can eat up the rote tasks that are sucking the lifeblood out of so many nonprofit staffers. They are drowning in administrative um tasks and functions and requirements that AI can do very well. In fundraising, it might be researching prospects, taking the first cut at communications with donors (not sending it out, just taking the first cut), helping with workflow, helping with coordination. Um And the responsibility is for organizational leaders, not line people and not tech people, but organizational leaders, to figure out where the sweet spot is, what we call co-botting, between what humans can do and need to do (connect with people, solve problems, build relationships) and what we want the tech to do, mainly rote tasks right now. So understanding AI well enough, tony, to figure out where it can um solve what we call exquisite pain points, and how to make that balance between humans and the technology, is the foremost task for organizations right now.

[00:08:32.35] spk_0:
Beth.

[00:10:18.39] spk_3:
Great. So Allison and Afua said it so well, so I'm just going to actually build on it, but go into a specific area that is kind of the intersection between AI and workplace well-being, and kind of the question, you know, will AI fix our work? Um Can it transform the work experience from being exhausting and overwhelming to one that brings more joy, that allows us to get things done efficiently but also to free up space to dream and to plan? Um And, or, is it going to be a dystopian future? I don't think so. Um And by dystopian, related to jobs, I'm talking about kind of, you know, will it get rid of our jobs, like who will lose out. And um just a week or two ago, the World Economic Forum released a report that predicts that nearly 25% of all jobs will change because of generative AI, and it'll have a, you know, a pronounced impact by displacing and automating many job functions um that involve writing, communicating and coordinating, which are the things that ChatGPT can do so much better than previous models. Um But it will also create the need for new jobs, right? I heard a new job description recently: a prompt engineer. So somebody who knows how to ask the types of questions of ChatGPT to get the right and most accurate and high-quality responses. And I think, building on what Allison said about co-botting, I think this is the future where AI and humans are complementary, they're not in conflict, and it really provides a leadership opportunity to redesign our jobs and to rethink and reengineer workflows so that we enable people to focus on the parts of the work that humans are particularly well suited for, like relationship building, decision making, empathy, creativity, and problem solving. And again, letting the machines do what they do best, but always having the humans be in charge. And again, that's why Allison and I always talk about this as a leadership issue, not a technical problem.

[00:10:50.46] spk_0:
Leadership, right? Okay, we'll get to the leadership responsibilities. George, what are you thinking about AI?

[00:11:30.47] spk_4:
Hard to add to such a complete start here. But I would say that just because this is a fad doesn't mean it's not also a foundational shift in the way we're gonna need to do work and how leaders are gonna have to respond. I also just want to say, like, right now, if you're listening to this podcast because your boss forwarded it to you saying we gotta get on this, I understand the stress you're under. It is really tough, I think, right now to be in the operational layer of a nonprofit, doing today's work, expecting to make tomorrow's change. So stick with us. We appreciate you listening.

[00:12:03.93] spk_0:
Thank you, George. Like angling for the co-host role, which uh which doesn't exist, so careful. Watch your step. Let's stay with you, George. You and I have chatted a lot about this on LinkedIn. Use cases: what uh what are you seeing your clients doing with AI, or what are you advising that they explore, as they're also managing the stresses that you just mentioned?

[00:13:00.00] spk_4:
Well, right now we're actively custom-building AIs based on the data, voice and purpose of organizations that we work with. One of the concerns that I have is that when you wander onto a blank-slate tool, like OpenAI, Anthropic, Bard, you name it, you're getting the generic average, as Afua pointed out, the generic average of that large language model, which means you're going to come off being generic. And so we're a little concerned about that and are trying to focus our work on how you tune your prompt engineering toward the purpose of the organization. We've already mentioned grant writing, reporting, applications, emails, appeals, customization, social posts, blog posts, editing. It is all there, if you're using it the right way, I think.
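[Editor's note: the tuning George describes, prepending an organization's own voice, mission and purpose so outputs are less generic, can be sketched roughly like this. The organization name, the profile fields and the wording below are hypothetical assumptions for illustration; no specific product or API is implied.]

```python
# A minimal sketch: instead of sending a bare request to a generative-AI
# tool, wrap it in the organization's own context so the output is less
# of a "gray jacket" generic average.
ORG_PROFILE = {
    "name": "Riverbend Food Bank",          # hypothetical organization
    "mission": "ending hunger in the tri-county area",
    "tone": "warm, direct, first-person plural, no jargon",
}

def tuned_prompt(task, profile=ORG_PROFILE):
    """Prepend organization-specific voice and purpose to a generic task."""
    return (
        f"You write for {profile['name']}, whose mission is "
        f"{profile['mission']}. Tone: {profile['tone']}.\n"
        f"Task: {task}"
    )

prompt = tuned_prompt("Draft a two-sentence thank-you note for a $50 donor.")
print(prompt)
```

The same wrapper can front any of the use cases George lists (appeals, social posts, grant drafts), so every staff member's prompts start from the organization's voice rather than the model's generic average.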

[00:13:22.32] spk_0:
And that gets to the idea of the prompt engineer that Beth mentioned. You're avoiding that generic average with sophisticated prompts, George.

[00:13:47.96] spk_4:
Absolutely. Yeah. I mean, we jokingly call it the gray jacket problem, where I showed up to a conference and I was wearing the same gray jacket as another presenter. And I was like, we both walked into a store and we both thought that the beautiful gray jacket we put on was unique, and that we would be seen as such for picking out such a great jacket. When in fact, when you go into a generic store and get a generic thing, you get a generic output. And my concern is that without that leadership presence saying, hey, here's how we should be using this with our brand tone, voice and purpose, then with every single new hire out of college we're rerunning the social media game. Beth has already played this game, Allison, we've already played this game, where we handed the intern the Twitter account because they used it in college. We're gonna just replay that again, and I'd rather just skip that chapter

[00:14:22.42] spk_0:
and we're going to get into this too, that that generic average also has biases and misinformation, false information. Um, Afua, how about you? What are you seeing your clients do? What are you advising, usage-wise?

[00:16:24.89] spk_1:
A couple of things. So, first, I think Allison touched on this as well: you can sort of take a breath. You don't have to embrace everything all the time for everything. I know it can seem right now that everyone's talking about generative AI and how it's going to change your world. Um But you can sort of take a breath, because um, as I think Allison and Beth both mentioned, right, the technology is only good if it's working for our mission, if it's working for organizations. So really taking the time um as a leadership team to really be clear on what you want to do, what differentiates your organization, and make sure your staff is all aligned on that, is the first thing that I advise organizations to do. The second is to think about then the use of AI both to help your organization function and deliver its services out in the world, but then also to think about how it impacts your staff. So I think sometimes we can get caught up in, we're going to use AI, I hear it's going to, like, you know, we'll be able to fix all of our external messaging, we'll be able to produce more reports, we'll be able to produce more um grant applications. All good, all valid. But remember also, your staff has to learn how to use it, and staff has to learn how to make the prompts. Your staff also has work internally that they are doing that perhaps AI could be used to help speed up, to free up their time and their brain space to lean into what humans do best, which is some of the relationships and having empathy. So thinking also not just about how AI can help you maybe generate more culturally appropriate images for different campaigns around the world, or how generative AI can help you fine-tune some messaging, or how generative AI can help you better sort of segment and deliver services to the communities that you serve,

but also how you can use AI to do things like help with notes, help with creating agendas, help with transcripts and more. What are some of the internal things to really support your staff that you can apply AI towards?

[00:16:48.76] spk_0:
Allison, that's leading right to some of those rote tasks that you mentioned. Um So I'm gonna put it to you in terms of what Kirsten Hill on LinkedIn asked: what's the best way for a busy nonprofit leader to use AI to maximize their limited time?

[00:18:49.78] spk_2:
So people are looking for some magic solution here, tony, and we hate to disappoint them, but AI is not magic fairy dust to be sprinkled all over the organization. Uh This is a profound shift in how work is done. It is not a new word processing, you know, software. AI is going to be doing tasks that only people could do until just now. Right? In any other year going back, um people would have had to be uh screening resumes or writing those first drafts, um or, you know, coordinating internally. And now basically the bots are capable of doing it. But just because they're capable of doing it doesn't mean that you should, you know, unleash the bots on your organization. Our friend Nick Hamlin at GlobalGiving, a data scientist, said AI is hot sauce, not ketchup: a little bit goes a long way. Beth and I have been cautioning people to step very slowly and carefully into this space, because you are affecting your people internally and your people externally, right? If a social service agency has always had somebody answering questions of, when are we open? And what am I eligible for? And when can I see somebody? And now a chatbot is doing that, tony, you have to be really careful that, one, the chatbot is doing its job well, and two, that the people outside don't feel so distant from that organization that it's not the same place anymore. So our recommendation is, that's

[00:18:52.67] spk_0:
a, that's a potential... I mean, mishandled, I guess, this could change the culture of an

[00:19:36.78] spk_2:
organization. Absolutely. If you are on the outside and you're accustomed to talking to Sally at the front desk, and all of a sudden the organization says to you, your first step has to be talking to this chatbot online instead, the organization has solved perhaps a staff issue of having to answer all these questions all at the same time, but it's made the interaction with those clients and constituents much worse. So we need to first identify what is the pain point we're trying to solve with AI, whether AI is the best solution for doing that, and then to step carefully and keep asking both staff and constituents, how is this making you feel? Right? Do you still feel like you have agency here? Do you still feel like you are connected to people internally and externally? And to grow it from there. There is no rush to introduce AI in everything that you do all at once. There is a rush to understand what the impact of automation is on your organization.

[00:21:00.42] spk_0:
It's time for a break. Stop the drop with Donorbox. Over 50,000 nonprofits in 96 countries use their online donation platform. Naturally, it's four times faster, easy payment processing, there are no setup fees, no monthly fees, there's no contract. How many of your potential donors drop off before they finish making the donation on your website? Stop the drop, stop that drop. Donorbox, helping you help others. Donorbox.org. Now back to Artificial Intelligence for Nonprofits, with Afua Bruce, Allison Fine, Beth Kanter and George Weiner. Beth, I see you taking copious notes. I think there's a lot you want to add.

[00:23:39.85] spk_3:
Oh, there's so many good points made, and I was taking a lot of notes because there was nowhere to jump in. Um So a couple of things. Uh George said, uh we did the social media thing and we turned it over to the intern, let's not do that again. But I'm not sure that's gonna happen, because with social media adoption, if we think back, uh you know, the dawn of social media started in 2003, and it really wasn't until six or seven years later, and I remember it quite distinctly, that the Chronicle of Philanthropy and organizations were really embracing it. There was a lot of skepticism, because social media adoption was more of a personal thing, because it started with the individual; it wasn't immediately brought into the workplace. Um And I think ChatGPT will be a little bit different, because the benefit there is, you know, the sort of the allure of efficiency, saving time, right? And, or, it can help us raise more money. So I think we might see it develop more quickly in the workplace, and if nonprofit leaders do smart adoption, then there will also be the training uh required, and the retraining and the re-skilling. And I think for me, the most important thing about this is that it is going to change the nature of our work, and that if you just let that happen, you're missing an opportunity, because we have a chance to really kind of accelerate workplace learning, both, you know, formal and informal, to re-skill staff in a way that embraces this, that's not going to cause more stress and burnout. The other thing I was thinking about was the gray jacket, and I love that um metaphor, George, I love it. Um In that, you know, if nonprofits are turning to and buying the $20-a-month subscription for ChatGPT, they're getting the gray jacket version and missing out on the opportunity to really train it. On the other hand, if they're just going without an organizational strategy, are they being trained? Are they entering confidential information into ChatGPT?

Are they using their critical thinking skills? Because we know that uh ChatGPT can hallucinate and pick up crap, right? Are they really, you know, are they doing that? Like, are they just saying, write me a thank-you letter for this donor, versus, write me a thank-you note in a conversational tone um that recognizes this donor's, you know, qualities, blah, blah, blah, right? And um and then going back and forth to refine a draft. So there's a piece of, like, um uh I guess technical literacy that has to be learned, and that's like the technical problem. But then there's also this whole workplace learning and workplace um uh, you know, reengineering of jobs, and bringing in new jobs and different parts of descriptions, that also needs to take place as well. So we've got to prepare the organization's culture uh to adopt this in a way that is ethical and responsible.
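[Editor's note: Beth's point about going back and forth to refine a draft can be sketched as a chain of follow-up prompts. This is an editor's illustration; the `ask_model` stub below is a hypothetical placeholder, not a real API call, and only the prompt-building logic is shown.]

```python
def ask_model(prompt):
    # Placeholder: a real implementation would call a generative-AI API.
    # Here we just echo a fake draft so the example is self-contained.
    return f"[draft based on: {prompt}]"

def refine(draft, critique):
    """Build a follow-up prompt that carries the prior draft forward
    along with a specific critique, instead of starting from scratch."""
    return (
        "Here is the current draft:\n"
        f"{draft}\n"
        f"Revise it so that it is {critique}."
    )

# First pass: the lazy, generic request Beth warns about.
first = ask_model("Write a thank-you letter for this donor.")

# Second pass: the human stays in charge, critiquing and redirecting.
followup = refine(
    first,
    "conversational in tone and recognizes the donor's five years of giving",
)
print(followup)
```

The loop can repeat (second draft, third draft) with a new critique each round, which is the human-editing workflow Beth describes rather than accepting the model's first output.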

[00:24:07.24] spk_0:
George, do you feel any better?

[00:25:12.72] spk_4:
I'm not sure how I felt to begin with, but the uh the piece to add on as a nuance there is not just the generic output; the normalization, and the ability for people to identify AI-created content, is going to explode. What does that mean? If I were to show you a stock photo right now versus one I took on my phone, it would take you 0.5 seconds to be like, yep, stock photo, stock photo, stock photo. And we have all seen the appeals that go out with the generic happy family with sunset in the background. And I think what's going to happen with the text that is generated by folks using gray jacket GPTs is that your audience is going to see it, identify it and shut it down mentally. It's like driving past that billboard or that banner ad. It's going to be a wash. It may seem unique to you. But I think, uh, I think that's another thing that we're going to see happen. I just want folks

[00:25:13.82] spk_0:
to know, okay, I just want folks to know that that gray jacket is a real story. You and another guy did show up with the exact same jacket

[00:25:21.64] spk_3:
at some point. An NTEN conference, wasn't it? In New Orleans?

[00:25:24.91] spk_4:
It was, it was a fundraising uh fundraising conference. And actually the other guy's name was George. So there were two Georges, two gray jackets. I felt very um silly.

[00:25:38.76] spk_0:
Yeah.

[00:26:29.31] spk_2:
So, um the ultimate ROI, Beth and I feel, and we wrote about in The Smart Nonprofit, is what we call the dividend of time: that is, to use AI to do those rote tasks that I talked about a few minutes ago in order to free up people to do human things. And George, the opportunity isn't, we hope, to send out more messages or to, you know, continue down the transactional fundraising path. The opportunity is to use your time to get to know people and to tell them stories and to listen to them. With or without AI, organizations are stuck in that transactional hamster wheel, tony, for raising money, and if they can't get out of that, AI is definitely not going to help them. The opportunity here is to move that entire model into the past and say, we're going to create a future where AI gives us the time and space to be deeply relational with people. That's the opportunity.

[00:27:17.67] spk_0:
Well, I'm gonna come to you in a moment to talk about how we can prevent this generic average, this gray jacket, uh from taking over our culture. But Allison, I just want to remind you that when I had you and Beth on the show to talk about your book, The Smart Nonprofit, I pushed back on the dividend of time, because it feels like the same promise that technology has given us through the decades. And I'm not feeling any more time available now than I did before I had my smartphone, or um whatever other technology I've adopted that was supposed to have yielded me great, great time. I don't, I don't feel any greater time.

[00:28:42.12] spk_2:
I don't believe that that was the promise before. And certainly what we found with the last generation of digital tech, tony, is that it made us always on, and everything became very loud and very immediate. No question about it. And this next chapter in AI is not guaranteed to give us time. What we're saying is there's an opportunity to work differently and to create this time if leaders know how to use it well. That's the big if. If we're just going to sit back and say, let's let AI supersize our transactional fundraising and send everybody 700 messages a day, because that's worked so well (said very sarcastically), then no, it is not going to free up any time. But what we are saying is this technology has the capacity to do all of that work that is sucking up 30 to 40% of our time a day, and we could be freed up, but only if we use it smartly and strategically.

[00:28:51.05] spk_0:
Afua, how about, you know, how we can help prevent these generic averages, with their biases and marginalization of already marginalized voices, and just the fear of it taking over the institution's culture? What are the methods to prevent that?

[00:33:20.42] spk_1:
Um Sorry, I think I would start with an analogy that I've used before: that technology is not a naturally occurring resource. There's no, like, river of technology that we just walk down to and scoop up, and now we have technology and it immediately nourishes us. To some of what Allison was just mentioning, um in order to actually use AI effectively, it takes intentional management. It takes intentional decisions about how to use it, when to use it and why to use it. And so that definitely applies when we think about how do we differentiate ourselves even as we use AI, and also how do we make sure that we then are being intentionally inclusive. Um I don't know of any technology that just by happenstance has been inclusive. Um And so it requires intentional decisions. So some ways that bias can appear in generative AI systems are with some of the coding that is done, and inherently with some of the data sets that are used. Even with large language models, they reflect, right now, everything on the internet. Um I know a lot of great people on the internet; there's a lot on the internet that does not align with my values, um or even my actual lived experience. Um And so how do we then think about sort of combating that? So I think, one, we've already touched on prompt engineering, to make sure that we are asking it the things that we want to get back. If you ask ChatGPT, for example, um to describe what are risks with generative AI, it will give you one list. If you refine that prompt to ask specifically what are risks with generative AI including, or specifically affecting, women or people of color, it will give you a more refined response. ChatGPT, a month ago, if you asked it: the doctor and nurse were arguing because he was late; who was late? It would tell you the doctor was late.

If you asked the same question but said "because she was late," it would tell you um it was the nurse that was late. That now has changed, because the people who are programming ChatGPT have manually made those changes. So as we think about how we can use it, it is through some of the software that we're building on top of it, some of the plug-ins that you decide to take advantage of or not take advantage of, how you might be able to use it on your own sort of proprietary information, with the right parameters in place to keep it with your own data, in ways that make sense for your organization. Um I think it's an opportunity for funders to fund the creation of new data sets, or fund the creation of more responsible plug-ins, or fund um, you know, new open source developments as well. So I think that's an exciting play there. Um And then I think also there is an opportunity to use generative AI in ways that really do enable more representation. Um Working with someone who is um an advocate for women's rights in India, we're talking through ways that she could more quickly generate posters and informational materials, using generative AI for both images and text, for different places on the subcontinent that she couldn't physically get to, um and where she didn't have talent on the ground. That is different, though, I'll say, from the announcement from Levi's a couple of months ago that they were going to use generative AI to create a diversity of models rather than hiring people, or Buzzfeed recently saying, at a shareholders meeting, that they would use AI to help create authentic Black and Latino voices, presumably um instead of talking to actual authentic Black and Latino people. So um they issued a statement a day or two later saying no, no, no, that's not what we meant, we meant something else.

Um But, but my point is, there are ways to think about how you can use generative AI as a nonprofit organization to better reach and connect, but also make sure that you are still doing it in a way, as I think all of us have said so far, that really does center people, that does center communities, and isn't trying to necessarily replace those relationships.

[00:34:11.43] spk_0:
Beth, our master trainer: I see a need for training, for leaders, for users. I mean, I'm not seeing any of this happening now, I'm not seeing how-tos, you know. But is there a training issue here for people at all levels? You're muted, sorry.

[00:35:55.78] spk_3:
Sorry about that. Absolutely, yes. But I make a distinction between training and learning, all right? So training, professional development: formal ways of learning particular skills, and those might be more around the technology literacy skills, like, you know, prompt engineering, for example. But then there's also the informal piece of learning, which is informal uh discussions with different teams about how it's changed their job, right? Or uh reflecting on a job description or job workflow that needs to be changed, and then sharing that with other departments. Um So, you know, so there's kind of like workplace learning that is connected with the workplace culture, um and which in some ways has nothing to do with the technology; it's kind of like a result of the technology. Uh What do we now have the possibility to do, because we have this freed-up time, or because we have not spent so much time staring at a blank screen and not doing anything because of blank screen syndrome? You know, ChatGPT has, like, helped us get to that first draft quicker, and maybe human editing has done the second and the third draft. Um uh And we've gotten a better result. Um And that has improved our end results with our fundraising goals or whatever we're trying to accomplish. Um You know, what comes next? Um So those are the pieces of learning that, um you know, haven't been possible a lot of times in nonprofits, because we're so busy trying to get the stuff done on our to-do list, and, or we're being overwhelmed. So, um so what is possible now that we're able to do our jobs better and we're able to take on these different tasks? How can we improve our results um and outcomes?

[00:36:24.68] spk_0:
George, how are you teaching your clients, who are hopefully translating that into learning, about using generative AI? Are you talking directly to leaders? Are you training users on skills like better prompting? What does teaching and training look like for you?

[00:38:14.82] spk_4:
I mean, we've done our best to put out as much free content as possible, first and foremost, to try to, you know, raise the tide of understanding for nonprofits, and we're putting all of that out as fast as I can think to create it. Internally, we're having weekly training sessions on use cases for us, and we're actively building and improving on client custom-created GPT endpoints that pull their data in and their purpose in. I want to go back, though, to Beth talking about what education and learning actually look like. We could train you on how to swim over this podcast; we could talk about all the things you need to do. I'm watching my daughter learn to swim. There's no storybook, there's no encyclopedia, there's no webinar that you could watch that would teach you how to swim. There is a fundamental component of jumping in the water and interacting with the tool: learning, coming back, realizing where it frankly lies to you (as I am really happy we have all pointed out), where it hallucinates, where it's helpful, and where the opportunities are. And by the way, that's gonna change next month, so it's not a single point in time. And, you know, you've been an engineer for a while and seen it: the code you played with a month ago is just different tomorrow, and what's possible is different tomorrow. On the other side of the coin, I'm a little concerned. Maybe you're getting anxiety when you hear yet another tool, yet another tool. There are over 1,600 tools listed on just one site, future tools dot IO, and there are going to be even more tomorrow. 95% of these things are just going to be gone within a year. So I'm also cognizant of the rabbit-holing that can happen in this.

[00:41:48.75] spk_0:
It's time for Tony's take two. I'm doing a Give Butter webinar later this month: Debunk the Top Five Myths of Planned Giving. I am especially excited about this one because the Give Butter host, Floyd Jones, and I are gonna be together, co-located, face to face, person to person, in person, real time. So with the energy that he brings, and I try to keep things light and moving, I think we're gonna have quite a bit of infotainment on this one with Give Butter: Debunk the Top Five Myths of Planned Giving. It's Wednesday, June 14th, at 2 p.m. Eastern time. But you don't need to be there; you can get the recording if you can't make it live. Watch the archive. I used to say that on the show: listen live or archive. Now it's just listen archive, no more live. But this one is listen live or archive, bonafide. If you want to make a reservation, you go to give butter dot com, then resources, and then events. Very simple. So make the reservation. If you can join us live, that would be fun, because I love to shout folks out and I'll answer your questions. If you can't, sign up and watch the video. It's all at give butter dot com, resources, and then events. That is Tony's take two. We've got boo-koo but loads more time for artificial intelligence for nonprofits. I'd like to turn to some of the downsides even more explicitly. We're all talking about efficiency and the time saved, the dividend of time. But at what cost, what potential cost, short term, long term? We've already talked about there being a bias towards dominant voices, existing dominant voices remaining dominant. Afua had a great example of someone in India, right, trying to represent folks that she can't get to see.
So there's a potential upside, but, you know, all this at what potential cost? And then there's one we haven't even mentioned. We mentioned false information, but in the video realm: deepfakes. Video and audio deepfakes, photograph deepfakes. Who wants to, I'm being egalitarian there, who wants to launch us into the risks and downsides part of the conversation?

[00:41:54.45] spk_1:
I'm happy to start. I'll say for the record, I am generally an optimist. However, there, there

[00:42:02.41] spk_0:
are some things. We've taken judicial notice.

[00:44:17.34] spk_1:
Thank you. Thank you; for the record, it has been noted. I appreciate that. So again, just reiterating what we've already said: intentionality really matters here. Without intentionality, things can go really wrong, because generative AI has the ability to hallucinate, and because generative AI is reacting to what data already exists. Recognize that sometimes the decisions we make based on that could be really wrong. So think through and imagine how AI might be used to help with hiring processes, even with a more standard version of AI. For example, Amazon, a few years ago, put some work into developing a system that would identify people who were best poised to be managers and succeed in senior management at Amazon. The results of the AI showed that white men from particular schools were best poised. Is this actually true based on skills? No, but it was based on the data that they had. It was trained on their internal data, and being a company in the Northwest, it just reflected what their practices had been. Amazon ended up not rolling that out, because they had a human in the loop who looked at what was coming out, reviewed it, and determined: this is not actually in line with our values; it's not in line with what we're trying to do. So I think pushes to completely remove a human from that decision-making loop are ways that generative AI can go really wrong very quickly in organizations. I think we've already started to talk about some of the bias that can appear in results. I gave the example already with gender; that is true along a number of other demographics as well. And so not correcting for that, or not recognizing that even with these large language models, even with something that's trained on the internet, not everyone is represented there. And so making a lot of decisions based on what's there may not give you the most inclusive and equitable response that you want. I think those are two ways that this can go wrong.

[00:44:33.58] spk_0:
Allison, anything you wanna add to this? Sure.

[00:45:47.94] spk_2:
So the AI revolution is far bigger than ChatGPT and generative AI. AI is going to be built into every software product that an organization buys: in finance, in HR, in, you know, customer service, in development. Those products were created by programmers who are generally white men and then trained on historic data sets which, as you just mentioned, are deeply biased as well. So you have a double whammy: by the time the product gets to an organization, it has gender and racial bias baked right into it. This, again, is why it's a leadership problem, tony. We need organizations to know what to ask about these products: to ask how it was built, what assumptions were made in building it, how it was tested for bias, and how you can test for bias before that HR software program you just grabbed and threw into your mix is screening out all of the black and brown people applying for these positions. So these are real, everyday concerns about integrating AI into work, and why we need to be careful and strategic and thoughtful about how we're integrating it into organizations.

[00:47:32.67] spk_3:
Yeah, I really want to pick up on a point that Afua made about the concern about not having human oversight at all times. One of my favorite examples of this comes from Kentucky Fried Chicken in Germany. They were using a generative AI tool that could develop different promotions that they could put out there, and the data set it was using was the calendar of holidays in Germany and, of course, some promotional language, like 5% off cheesy chicken, right? And they got into trouble because there was a lot of social media messaging that was just put out there, generated by the generative AI, and the message went out on November 9th, which is the anniversary of Kristallnacht, considered the beginning of the Holocaust. And the promotion was, you know, enjoy $5 off a cheesy chicken to celebrate the night of broken glass. And so I think the issue is that we begin to put so much trust into these tools that we think of them as human, or the equivalent of human intelligence, and we just take them at face value, and we don't have that human intervention with those critical thinking skills. And that's where harm could be done to the end users. So I really think it comes back to that co-botting example that we've talked about and, again, the need for leaders to really be reflective and strategic in how they execute it. It's not just about learning the right prompts to ask ChatGPT to get a particular output.

[00:48:10.15] spk_0:
There was another example of that at, I think it was, a college. They put out a press release, and at the bottom of the email it said, you know, generated by ChatGPT, or something. I mean, you've all talked about humans being involved with the technology, and a human hadn't even scanned it to know to take that credit line off the email. So, you know, blind usage.

[00:48:58.01] spk_3:
That's an interesting thing to think about. Like, do I disclose? If I was writing a post or an article and I went to ChatGPT because I needed to get it from 1,000 words to 750 words, I could ask it: too long, didn't read; standby for some text; please reduce from 1,000 words to 750 words. Which I actually have used. But I don't just cut and paste; I actually sat and compared how it changed the language. And one thing I did notice is it took out any sentences that had a lot of personality to them and transformed the text into this very generic kind of text, you know. So again, it requires human editorial oversight, if you will.

[00:49:20.80] spk_0:
George, you want to talk about risks and downsides?

[00:50:17.62] spk_4:
Yeah, I would say this is more of a bigger-picture risk that I see as the net result of GPT tools being built into everything we use. One is that if you are using it blindly, you are the product; you're handing over information. There was an actual OpenAI hack, well, a hack or data leak, where conversations that were being stored on the site were accidentally shared in the open. So I think that's something to be aware of. Bigger picture, I am watching very closely the impacts of chat-first search, Bard and Bing. Bard is Google's AI that has now rolled out of private into a public beta, and it is going to destroy organic traffic for information-based searches to nonprofits inside of what I believe is the next two years. The second-order effects of that are so many that we would need several podcasts to understand them, but I'm no longer telling clients that we should expect more organic traffic next year versus this year.

[00:50:57.37] spk_0:
You experienced this with your own Whole Whale site. You did a search, and the search tool gave you back some of your Whole Whale content. It did credit it, but your concern was that that credit was purely optional. Right? You experienced this with your own intellectual property.

[00:52:14.75] spk_4:
I'm watching it across a lot of it. You know, we get roughly 80,000 monthly users looking for information that we put out there. I test what that looks like when I do similar searches on Bing, as well as perplexity dot ai, and now Bard. The thing that scared me the most is that Bard just sort of decided not to even bother with the footnotes in its current iteration and just gave the answer to one of several articles that drive significant traffic to our site. There are two types of traffic that SEO is providing: informational and transactional. So for the informational, I would encourage your organization to do some of these sample searches and begin to plan accordingly. And it makes me a little sad that that part of nonprofits' ability to be a part of the conversation, when somebody's asking for, I don't know, information about PrEP and HIV, or something about LGBTQ rights history, doesn't get you engaged with the organization. It just gives you the answer, and there's something missing there that I think is going to have negative downstream impacts for social impact organizations.

[00:52:22.87] spk_0:
You expect to see declines there?

[00:52:38.37] spk_4:
There will be a decline, significant declines. And that's concerning to me, because it's cutting nonprofits out of a conversation that they have traditionally been a part of when people are looking for information, and especially at a time when we're going to have a rapid increase in disinformation, because these tools can be used to create it at scale.

[00:54:19.95] spk_0:
We already have enormous disinformation. It's hard to imagine it growing exponentially or logarithmically. I'm interested in what you all think about my concern. Executive summary: that it will make us dumber. My reasoning is that a lot of what we're suggesting, not just us here today, is that generative AI is a good tool for a first draft. Beth, you mentioned the blank screen syndrome. But to me, writing that first draft is the most creative act that we do in writing, or in composing; it could be music. And my concern is that if we're ceding that most creative activity away, then we're reducing ourselves to editor or copy editor. Not to minimize the folks who make their living editing and copy editing, but it's not as creative a task for a human as sitting in front of that blank screen, or that empty pad for those of us who maybe start with pen and paper. We're ceding the most creative activity away and reducing our role to editor, which is an easier job than starting from whole cloth. And so I fear that will make us dumber, reduce our creativity. And I'm saying, you know, generally dumber. You're all being so polite. You could have just jumped in.

[00:56:12.96] spk_3:
I was, well, I didn't want to just interrupt you, but I do want to challenge you. I agree with you, but I also disagree with you. So one thing that I worry about, and it might be science fiction, and I haven't yet seen research on this, but I do know there's this thing called Google brain. You may be familiar with it. You're trying to remember something and you can't remember it, because you haven't exercised the retrieval muscles of your brain, so you go to Google and you start Googling to remember it. That's this thing called Google brain. And there was a study that showed that for people who were using Google Maps or Apple Maps to navigate, it is making their geospatial skills less robust. And so the recommendation is, you don't want to completely lose your ability to navigate; you should go back to a paper map sometimes. So there's definitely, and there is research around this, that when you're doing something in an analog way, if you're writing it down, it encodes in your brain in a different way than if you're typing it. So the thing that I worry about with this is less about it taking our creativity away, because I think if you're trained as a prompt engineer, you could be trained to brainstorm with it in a way that sparks your creativity versus takes it away. But what I'm worried about is, how will this affect the human brain down the road, another decade or so? If we're not using our brain skills of encoding information and retrieving information, and it's like a muscle, you know, is that going to make us more at risk for dementia or Alzheimer's down the road? I know it sounds crazy, but that's the thing I worry about.

[00:56:47.28] spk_0:
I don't think it's crazy. That's what I'm concerned about. I'm concerned on a world level that we all collectively will just not be as creative, and I'm calling that being dumber.

[00:57:49.77] spk_1:
I don't think the amount of creativity and innovation is finite, or that if we use tools we're no longer going to be creative. We have computers now to help us draw, to help us write. We can write on a computer, whereas before we had only paper; we had to draw with a limited set of tools. When we got computer-aided graphics and more, we just had more ways to see the world, more ways to figure out what images we wanted to see and how we wanted to engage. Also, as someone who likes to write a lot, I'd say I'm really grateful for my editors, and for the fact that their brains work differently than mine does when I start writing. Those skills are complementary. But I say that because I think we will have to change, will evolve, how we think, what we think about, and how we work. I think that is a different type of creativity, different types of innovation, rather than us just no longer being creative.

[00:57:55.80] spk_0:
I didn't mean eliminate our creativity, but reduce it.

[00:58:10.94] spk_2:
It's important, tony, to stay out of these binary arguments of AI is so bad or AI is so good. It is going to be a mix, as technology always has been. I was just reading a book the other day that talked about the introduction of moving pictures, and how appalled people were that, you know, they could see these images over and over again, right? And it was going to take away all of people's creativity.

[00:58:23.12] spk_0:
The same thing when silent movies became talkies.

[00:58:36.56] spk_2:
You know, we do this every time. We are changing our brains; I'm not saying that we aren't. However, there is going to be an explosion of creativity, of jobs we haven't thought of yet, of opportunities we haven't thought of, that comes out of this next chapter that we are just beginning now. And I think it's important to go into this with as much information as we can, cautiously again, but with a sense of excitement and adventure, because something really, really interesting is about to unfold.

[01:00:49.90] spk_3:
And I just want to affirm what Allison just said about this kind of new creativity. It was making me think of, I think it was about a year ago, when DALL-E came out, which is the image generator that works by looking at patterns in pixels of images that are on the internet and creates something new based on your prompt. And I heard an artist talking about this. There's this whole debate: are tools like DALL-E, that are analyzing pixel patterns in images created by real artists, stealing their work without their consent or without compensation, or is this a creative thinking tool? So, you know, I was messing around, and I have a black and white Labradoodle puppy. And so I asked it, you know, create an image of a black and white Labradoodle puppy surfing a wave in the style of Hokusai. And it generated four images in the style of Hokusai. Some of them were silly; some of them were, oh, this is really interesting. And it prompted me: oh, what would it do if I asked it to do this in the style of Van Gogh, or the style of Monet? And then I started getting all these other ideas about things that I wanted to do, and before I knew it, I had 1,000 different images of a black and white Labradoodle puppy doing all kinds of things that I wouldn't even have thought of if I hadn't seen the response it gave me from the first one. But is that different than if I just did a brainstorm with myself about what I could draw, if I could draw anything? Or is this aided creativity, much in the way that an artist would go out, you know, and look at landscapes for inspiration?

[01:01:22.10] spk_2:
Yeah. Now one place we're in a lot of trouble, tony, is the fact that our policy makers are so far behind on AI, right? We're gonna have enormous copyright issues. We have enormous ethical issues coming up, of when AI should be used in policing. The Department of Defense is experimenting right now with completely automated lethal drone weapons. Is that really who we want to be, that we have robots killing people without any human oversight on the ground at all, or in, you know, some headquarters at all? There are really profound policy issues that we should be talking about right now, and we are way behind on those.

[01:01:51.16] spk_0:
George, you wanna comment on the role of government, or push back on my concern?

[01:02:45.37] spk_4:
The role of government is beyond my pay grade, if I'm honest. You know, I'll stick to my scope. I will say, though, tony: in 2004, podcasting became a thing, a new technology. Before that, there were gatekeepers, and I think you've done very well as, as far as I know, the longest-running podcast for nonprofits. It opens up new opportunities. There were over two million images created on DALL-E per day, and that was back in October. So I'm willing to bet it has increased the output. And on a personal level, it has increased my output, and I have, you know, had a lot of fun building and working with it, as it has unblocked me for the new creation of content. Undeniably, though, the way we use tools then shapes the way we change. And I do agree there is a depth of knowledge potentially lost in being able to simply say, write me an article about this thing, and then I tweak it, as opposed to learning an approach. And I think academia is really reeling from how to teach this next generation, and I'm curiously watching how they train the next generation of people coming into the workforce.

[01:03:24.54] spk_0:
Well, let me say, you're all probably more optimistic than I am. I don't know if I'm skeptical; I'm just concerned, I'm just concerned about the dumbing down of the culture, and the culture meaning the world

[01:03:31.72] spk_2:
culture, you

[01:03:33.67] spk_1:
know,

[01:03:36.64] spk_2:
Have you seen our culture? How much dumber?

[01:03:39.30] spk_0:
Yeah, we're starting at a pretty low level. That's how bad I think it could get. Yeah. Yeah.

[01:05:17.38] spk_1:
I just wanted to emphasize, I don't think we've spent enough time on one of Allison's last points about the copyright issues, the ownership issues. Even as the data economy has exploded since the age of big data was declared, we have created systems that really extract from certain populations, historically marginalized populations, rather than enable and empower these same populations whose data we then rely on, or I should say corporations rely on, and oftentimes nonprofits as well. And that has just increased at scale with generative AI, and with AI more broadly, right? Especially with generative AI and things that scrape the whole internet of things that people put out there: no longer, as George mentioned, attributing sources, no longer pointing to source material, no longer giving credit to people. Same with artists and music and others. I think that is a huge issue. And from an ethical perspective, especially for a nonprofit whose mission is to empower marginalized communities, if that's a particular nonprofit's mission, it's a big question to consider: how and when should you use generative AI systems that do not attribute information and don't close that loop back to the people who powered the systems?

[01:05:25.25] spk_0:
All right.

[01:05:26.81] spk_1:
I don't know that that's a positive note, but it's a note.

[01:07:14.66] spk_0:
That was more mixed than positive, but great, valuable points: great promise, with potential catches, and leadership, the importance of leadership and proper usage and all. All right, thanks to everybody. Afua Bruce: you'll find her on Twitter at afua underscore bruce; she's principal of ANB Advisory Group. Allison Fine, president of every dot org, where there are fires to put out; you find Allison on LinkedIn. Beth Kanter, at Beth Kanter dot org. And George Weiner, CEO of Whole Whale, whole whale dot com, and George is on LinkedIn. Thanks, everybody. Thanks very, very much. Next week: what power really sounds like, using your voice to lead and using your executive skills. If you missed any part of this week's show, I beseech you, find it at tony-martignetti dot com. We're sponsored by Donorbox, with intuitive fundraising software from Donorbox. Your donors give four times faster, helping you help others. Donorbox dot org. Our creative producer is Claire Meyerhoff. The show's social media is by Susan Chavez. Marc Silverman is our web guy, and this music is by Scott Stein. Thank you for that affirmation, Scotty. Be with me next week for nonprofit radio, big nonprofit ideas for the other 95%. Go out and be great.

Nonprofit Radio for September 19, 2022: The Tech That Comes Next

 

Amy Sample Ward & Afua Bruce: The Tech That Comes Next

Social impact orgs, technology developers, funders, communities and policy makers can all do better at technology development, argue Amy Sample Ward and Afua Bruce in their new book, “The Tech That Comes Next.”

 

 

 

 

Listen to the podcast

Get Nonprofit Radio insider alerts!

I love our sponsors!

Turn Two Communications: PR and content for nonprofits. Your story is our mission.

Fourth Dimension Technologies: IT Infra In a Box. The Affordable Tech Solution for Nonprofits.


 

 

 

View Full Transcript

Transcript for 609_tony_martignetti_nonprofit_radio_20220919.mp3

Processed on: 2022-09-16T13:48:08.345Z
S3 bucket containing transcription results: transcript.results
Link to bucket: s3.console.aws.amazon.com/s3/buckets/transcript.results
Path to JSON: 2022…09…609_tony_martignetti_nonprofit_radio_20220919.mp3.155826533.json
Path to text: transcripts/2022/09/609_tony_martignetti_nonprofit_radio_20220919.txt

[00:02:21.94] spk_0:
Hello and welcome to tony-martignetti non profit radio, big non profit ideas for the other 95%. I'm your aptly named host of your favorite abdominal podcast. Oh, I'm glad you're with me. I'd bear the pain of pseudoagraphia if I had to write the words "you missed this week's show": The Tech That Comes Next. Social impact orgs, technology developers, funders, communities and policymakers can all do better at technology development for greater equity, argue Amy Sample Ward and Afua Bruce in their new book, The Tech That Comes Next. On Tony's take two: heading to the Holy Land. We're sponsored by Turn Two Communications, PR and content for nonprofits. Your story is their mission. turn hyphen two dot c o. And by Fourth Dimension Technologies, I.T. infra in a box, the affordable tech solution for nonprofits. tony-dot-M.A.-slash-Pursuant D, just like 3D, but they go one dimension deeper. It's my pleasure to welcome back Amy Sample Ward. She's the CEO of NTEN and our technology and social media contributor, there at amysampleward dot org and at Amy R S Ward. And to welcome Afua Bruce. She is a leading public interest technologist who has spent her career working at the intersection of technology, policy and society. She's held senior science and technology positions at DataKind, the White House, the FBI and IBM. She's at afua underscore bruce; Afua is A F U A. Together they've co-authored the book The Tech That Comes Next: How Changemakers, Philanthropists and Technologists Can Build an Equitable World. Their book is at the tech that comes next dot com. Amy, welcome back to nonprofit radio. Thanks

[00:02:26.04] spk_1:
for having us.

[00:02:27.35] spk_2:
I’m glad to

[00:02:28.07] spk_1:
hear what you

[00:02:29.25] spk_0:
think. And both of you together for the first time; Afua, very nice to meet you. Glad to have you.

[00:02:34.51] spk_2:
I’m so excited to be here.

[00:02:53.06] spk_0:
Thank you. Excited, that's terrific. You may be more excited than I am, I don't know, but I know I'm very excited. I'm very pleased. I already said I was pleased; excited is even better than pleased. Thank you. Afua, let's start with you, since people know Amy Sample Ward's voice. I feel like we should start with a definition of technology, the way you two see it.

[00:03:45.79] spk_2:
Absolutely. Technology can mean many things to many different people. When people simply hear the word technology, it can bring up wonder and hope for the future, and assistive devices that may transform our world, but it can also bring up feelings of trepidation and confusion. And so in the book, when we talk about technology, we define it very broadly, as the tools that exist to help us exist in the world. This can be anything from digital systems and websites and, like, AI, for example, but it's also more basic things, such as, you know, paper or other tools that are just used. And so we define it extremely broadly in the book. The book does focus on digital technologies, though, really looking at adoption and use and development of digital technologies, especially as it relates to the social impact sector.

[00:04:07.28] spk_0:
And what troubles you about our relationship to technology?

[00:04:36.38] spk_2:
Um, well, I am an engineer, a computer engineer specifically, and so I love technology. I love being in technology. I love doing all sorts of things with technology. I love designing new ways to use technology and figuring out how to design technology to support new ways of interacting that we have. I think one of the things that does
give me pause, though, is how some see technology, or some try to position technology, as the be-all and end-all, the magic solution that we could have to solve all of our problems; that if we simply find the right technology, if we simply insert technology into any societal problem that we're facing, that technology will magically fix whatever we have been facing. And that's simply not true. Technology is not a natural phenomenon. It is something that we create, something that we should be intentionally creating to minimize bias, to make sure that technology is developed and used in inclusive ways and really does enhance what we want to do as humans, which is hopefully live well together in community, and not just be used as some big tool to force different, often disproportionately impacting outcomes.

[00:05:45.54] spk_0:
And you have a lot to say about development specifically, more equitable development.

[00:07:43.75] spk_2:
Yes, absolutely. I think equitable development of technology is something that can and should continue to grow. Historically, especially when we look at the past several decades of the rise of digital technologies, and technology more broadly, the power, the money, the education have been concentrated in one group, and a lot of other groups, including a lot of historically underrepresented or overlooked communities, based on ethnicity, based on gender identity, based on sexuality, based on physical ability, mental ability, or more, have really been left out or forgotten about. And so when we talk about a more inclusive design process and a more inclusive development process for technology, we're talking about, one, being more inclusive about who is actually allowed in the room when we talk about technology design. Who do we see as capable of being technologists, and who has those abilities to engage that way? But also recognizing that because technology doesn't exist in a vacuum, because technology can't magically solve all of our problems on its own, even if you're not a technologist, you should be at the table in some of these design conversations, because you are part of communities that have needs, and those needs should be articulated at the start of the design process. You might understand a particular subject matter. In the book we talk about using technology in the education space, in the food space, in other spaces as well. You may have some of the knowledge that is critical to making sure that the technology supports the overall goals of those sectors. And so it is important that as we think about being inclusive in developing technology, we make space for not just different types of people who are able to be technologists, but also different types of expertise that we need in that development process.

[00:08:09.00] spk_0:
So you're not so pleased with the model where rich, privileged white males develop technology, identify what's going to be solved and how best to solve it. I assume that model is not working for you.

[00:08:27.26] spk_2:
I would say I would go even further than

[00:08:30.25] spk_0:
going out and

[00:08:30.92] spk_2:
it’s not working for most of us. Um so it is not working for most of us to have the power concentrated in that

[00:08:52.64] spk_0:
way. Okay. And in fact, see, I don't know who wrote which sentences, but somebody wrote: we can't continue to perpetuate the belief that those with the most money know best. I don't know, maybe your editor put that in. It may not even be one of the two of you. I don't know. Maybe

[00:08:57.13] spk_2:
I

[00:08:58.76] spk_0:
trust

[00:09:36.43] spk_2:
Trust me, Amy and I spent many, many hours on many, many aspects of writing and editing to make sure that what is in the book, we both stand behind. And so, absolutely, that sentence is something that we both stand behind. As you said, we can't let one small population, in this case rich, privileged white men, be the ones who design all of the technology and decide all of the outcomes for everyone. And in the book we talk a lot about why it is so important to go back to communities, who understand their needs and their priorities, and let communities drive that process. That then includes policymakers, that includes funders, that includes technologists themselves, and that includes

[00:09:53.21] spk_0:
uh

[00:09:54.53] spk_2:
the leaders and employees at social impact organizations.

[00:09:58.85] spk_0:
Another aspect of it is just what problems get solved, what gets attention?

[00:10:06.18] spk_2:
Absolutely. And I think we have lots of ideas on this, but I have been talking for so long. I would love to pass it over.

[00:10:29.88] spk_0:
We'll get to Amy. Amy Sample Ward will get their chance. Okay. Alright, if you insist. Okay, if we have to go to Amy now. Alright. You say, somebody wrote this sentence, exactly related to what I was just saying: we dream of community-centered work that builds from community-centered values. And there's a lot of emphasis on going back to values. Why don't you just sort of introduce us to some of those values, Amy?

[00:14:05.53] spk_1:
Sure, happy to. I think one thing we say in the book, and we've enjoyed getting to talk to a lot of groups about it since the book came out, is that everything we do as people is centered on values, but oftentimes we don't talk about them. We don't make sure that our values are aligned when we start working on something, and so then those values become assumptions, and I think we've all heard many different puns about what happens when you operate on assumptions, right? That's kind of part and parcel of also assuming that the only people that can make technology are people with certain degrees, or that have a certain amount of money, or that look a certain way. Again, those are values that we're not talking about and that we need to talk about, so that we can be really intentional about what we want to focus on. In the book, Afua has already been speaking to some of those values: that there's an important role for, and we need to prioritize, lots of different lived experience as part of any technology project; that a lot of different people should be involved in every single stage of that process, not just at the end, once we build something and we pull the little cover off and are like, ta-da, we built it, what do you think? There should be no pulling the cover off. Everyone should have already been part of it and known it was being built this whole time. But also values that I think are important to name early in the conversation, around accessibility. So much of the barriers and the walls around technology projects, whether people are talking about them explicitly or not, that are maintaining this false reality that only certain people can be involved, are coming from a place of saying: oh, we speak a certain way, we use these acronyms, we talk about things without slowing down for other people to be involved.
So what does accessibility look like? Not just that a tool could be used with a system of devices, but really that you are not using jargon, that you're making sure things are being held at the time of day when those folks that you want involved can be there, that child care is provided at your user group meetings. At every level, you are operating in ways that really do make things accessible to everyone. And I think another value that we like to talk about early in conversations: the book has kind of a big idea. What if the world was not the world we have right now? What if it was equitable and just and wonderful? And I know you want to talk about the illustrations, colorful, you know. So to get there, it's not like two steps. It's not, okay, that'll be on the 2024 plan, right? It's a lot of work. And so the relationship and expectations we put on technology, just like social change, are that we can make incremental, right-now, immediate changes, and at the same time we can be working on really big changes, the shifts that get us to a very different world. We have to do both. We can't just say, well, let's live with harmful technologies and harmful realities until we can all of a sudden just change over to the non-harmful ones. We need to make changes today as we're building for bigger change.

[00:16:31.93] spk_0:
It's time for a break. Turn Two Communications. They have a bi-weekly newsletter that I get. It's called On Message, and they had something interesting in the last one: five ways to find the timely hook. I've talked about news hooks with them. It can be a great opportunity for you to be heard when there is some kind of a news hook. So how do you find these timely hooks? A couple of their ideas. Track recognition days and months. I just did that in August. It was National Make-A-Will Month, and I did a bunch of content around that. There are days and months for everything, like Pickle Day and a lot more. So you can search for the recognition days and months, find something that fits with your work. Another one was just staying current with the news. They said they were gonna send their e-newsletter on the day that Queen Elizabeth died, but they thought better of it, because you're not gonna be able to get people's attention. People are just gonna be deleting emails more rapidly because they're consumed with the death of the queen. So they held off a day or two. And tying to a trend is another one that they suggested. They give the example of when including salaries in job postings was trending, and they used the example of somebody who actually wrote contrary to that idea. But it was timely, because it was something that lots of people were talking about. So there's a couple of ways of identifying the hooks. You can get their newsletter, On Message. Of course, you go to turn hyphen two dot c o. Turn Two Communications. Your story is their mission. Now back to The Tech That Comes Next. How is it that technology is not neutral? Amy.

[00:16:36.71] spk_1:
well,

[00:16:37.09] spk_0:
humans, humans,

[00:17:26.74] spk_1:
I don't think humans have the capacity to be neutral, and we are the ones creating technology. I mean, even before digital technologies, think of the number of pieces of farm equipment that could be considered technology. Humans built those, and they killed people who were left-handed, because the tool was built by right-handed people to be used with your right hand, right? There's not a lot of evidence that humans can be neutral. And so then you add to that that we're building it with an often very small group of people, not talking about values, for something that is meant to be used in a different context with different people. It just doesn't have the capacity to be neutral.

[00:17:43.78] spk_0:
Let's take something that's so ubiquitous, it's an easy example. Let's take Facebook. Facebook is there; you can use it or not use it. How is that not just a neutral entity sitting there for you to use or not use?

[00:19:31.46] spk_1:
I mean, you are welcome to use or not use Facebook, but just because you have the choice to use something or not use it doesn't mean it's neutral. The platform is collecting your data, is selling your data, is deciding whether and how you can use the tool to connect with other people or to create groups, right? It is not allowing you control over how your data and your use of the platform goes. So it's kind of a false choice, really. And for a lot of people, it is very much a false choice. There's the feeling that they cannot not use it, if it's the only, quote unquote, free tool that they could use to find certain resources, or to otherwise talk and stay in communication with certain people. But at what cost? And I think that's the kind of conversation we're trying to spark in the book. Technology isn't neutral, let's just accept that, and then we say: so at what cost, at what harm, are people having to make these choices around how they navigate technology? And we have never presupposed, in this book or in our lives, that Facebook or any other platform is going to necessarily make the choices that are best for the community, and that's why policymakers have an entire chapter in the book. You don't need to be a tech specialist or have a technical background to be a policymaker that's making smart, protective policies for users. We need to say, hey, people should be able to access and protect and restrict their data, let's make some policies around that, right? Because the platforms are not going to make that policy themselves, the policy that restricts them. And so I think, again, all of these different groups together get us to the tools that we need, and not just the technology developers themselves.

[00:19:56.28] spk_0:
Afua, anything you want to add to that? My question about why Facebook is not a neutral tool.

[00:20:50.18] spk_2:
I think Amy gave a really good overview as to why technology, and Facebook in this case, is not neutral. You know, a lot of people now you'll hear say, the algorithm made me see it. The algorithm didn't make you see something on its own. That just also goes to the fact that someone has programmed the algorithm. Someone has decided what will be given more weight or less weight, what will be emphasized or won't be emphasized. And so that then drives your interactions, and the biases that the programmers have, or the stated goals that the owners of the platform have, then get encoded into the technology that you use, whether it's Facebook or any other platform, and that then can affect how you interact. Even if you do decide to opt in to using the technology, as Amy mentioned, you often have a choice as to which technology you want to use, what platform you want to log into and engage with or not. But once you're there, your choices are often limited in ways you might not realize, because of the fact that technology is not neutral.

[00:21:20.02] spk_0:
We're getting into the idea of oppressive systems, which the book talks about. Afua, you want to explain? So Facebook may very well be an example, but what are oppressive systems generally?

[00:23:01.71] spk_2:
You know, I think one of the underlying themes of our book is that technology can really be used to enhance goals and missions, and we argue in the book that we want social impact organizations, and especially communities, to find ways for technology to enhance their mission, to help them accomplish their goals more often. But the reality is that technology sits on top of people, because it's created by people. And so to the extent to which there are oppressive systems in society, whether that's around how people get jobs or access education or access other resources, that can just be translated into the technology systems that then help facilitate our lives. If you take outdated policies that have been rooted in unequal access, for example, and just write code that directly implements those policies, the new system, this technical system you've created, has those same oppressive aspects in it. And so again, when we talk about designing technology, it does need to come back to: what communities are we designing for? Are we talking to them? Are we letting communities really drive that work? And through the development process, are we really keeping in mind some of the historical context, some of the social context, some of the knowledge about biases and how that appears in different technology, and what ties that has to how organizations function and how policymakers do their work, and what we need to be funding to make sure that we have the time and the money to invest in a more inclusive process.

[00:25:40.82] spk_1:
I just want to add, as Afua was talking, and kind of trying to hear our own conversation while we're in it, to share the reminder that while, of course, Facebook is this giant, huge technology platform, we are also talking about technologies that nonprofits make. An organization that decided to have their staff, or hire a web designer, build something on their website that allowed users to complete their profile or to donate — all of these things that organizations are doing with technology are also developing technology, right? It also needs to be inclusive. It should also have a lot of your community members and users as part of that process the whole way, right? This isn't just for for-profit giant tech companies to hear this feedback. This is everyone, including the way we fund our own technology inside of organizations, the way we prioritize and build, or don't prioritize and don't build, technology. And when we think of it that way — it's just so easy, I think, to say, oh my gosh, Facebook is an oppressive platform, all of these things are horrible, it's done all of these things. We could search for news articles from a decade of issues, right? But that kind of shifts the attention, and acts like we as organizations don't have any blame to share. Not that we're sharing in Facebook's blame, but we, too, are part of making not-great decisions around technology. There's an organization — I experienced this as a user on their website and had to give them some feedback — they collected demographics as you're creating your profile, super common to do, right? Their race and ethnicity category, for all humans that would answer this question, only had four options. Of all of the races and ethnicities in the world, there were four.
Not one of those options was multiracial. Not one of those was other, let me tell you the thing you didn't list here, right? You had to pick: a required question with four radio

[00:25:52.25] spk_0:
buttons.

[00:26:29.98] spk_1:
That is harmful, right? And maybe there was a reason — not a good reason — maybe there was a reason that you felt, you know, your funder makes you report in those four categories. I totally understand how hard it is to manage your work as well as meeting all these funder reporting requirements. That's something we talk about in the book; that is an issue. We need to go fix funders' reporting requirements. But just because a funder says, give us data in these four categories, does not mean those are your four categories, right? You have an obligation to your community to be better than that. And so I just want to name that as an example, that we're not just taking the easy route of complaining about Facebook, which I would love to do for like five more hours.
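Amy's four-radio-buttons example points at a concrete fix: a form can let people answer fully, including self-description, and still roll up to a funder's narrower categories only at report time. Here is a minimal sketch of that idea. The option list, function names, and the funder mapping are all invented for illustration, not taken from the book or any real organization's form.

```python
# Hypothetical sketch: collect identity with self-description allowed,
# then map to a funder's narrower buckets only when reporting.
# Option names and the funder mapping are invented for illustration.

RACE_ETHNICITY_OPTIONS = [
    "Asian", "Black or African American", "Hispanic or Latino/a/x",
    "Middle Eastern or North African", "Native American or Alaska Native",
    "Pacific Islander", "White", "Multiracial",
    "Prefer to self-describe", "Prefer not to answer",
]

def collect_response(choice, self_description=""):
    """Store what the person actually said, not a forced bucket."""
    if choice not in RACE_ETHNICITY_OPTIONS:
        raise ValueError(f"unknown option: {choice!r}")
    return {"choice": choice, "self_description": self_description}

# The funder's categories constrain the report,
# not the form the community member fills out.
FUNDER_CATEGORIES = {"Asian": "Asian", "White": "White",
                     "Black or African American": "Black"}

def to_funder_category(response):
    # Anything the funder's scheme can't express rolls up to "Other"
    # in the report, while the real answer stays in our records.
    return FUNDER_CATEGORIES.get(response["choice"], "Other")

r = collect_response("Prefer to self-describe", "Afro-Caribbean")
print(to_funder_category(r))  # → Other
```

The design choice is the separation: the community-facing form stays inclusive, and the lossy mapping happens only at the reporting boundary, where the funder's constraint actually applies.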

[00:26:41.44] spk_0:
No, Facebook is not even, Facebook is not even mentioned in the book,

[00:26:43.71] spk_1:
You know what I mean? Also trying to name it as something we're doing inside our organizations, too.

[00:28:58.50] spk_0:
Your example reminds me of the example you cite from Jude Shimmer, who says, you know, they're filling out a donation form and there's no Mx option. It was Mr., Mrs., Miss, I guess. No Mx. By the way, you had several nonprofit radio guests quoted in the book: Jason Shim, Steve Heye, Jude. So I'm glad nonprofit radio brought these folks to your attention, you know, elevated their voices so that you became aware of them. Well, that's elevating voices. That's exactly right. It's time for a break. 4th Dimension Technologies. Technology is an investment. Are you seeing this? You're investing in staff productivity. You're investing in your organization's security. Donor relations, because you're preserving giving histories and each person's preferences and their attendance and things, so you're certainly investing in your donor relationships. And in your sustainability, because technology is gonna help you preserve your mission into the future. I don't want to just throw something out and then not explain it. So see technology as an investment. 4th Dimension can help you invest wisely, make those savvy tech investment decisions. You can check them out on the listener landing page. Just like 3D, but they go one dimension deeper. Let's return to The Tech That Comes Next. Alright. So, no, as I said, Facebook is not mentioned in the book. I was choosing that as a ubiquitous example. But let's bring it to something that is nonprofit-created. Who wants to talk about — I kind of like the John Jay College case, because I used to do planned giving consulting for John Jay. Which of you knows that story better? Nobody

[00:30:43.41] spk_2:
We're looking at each other's resumes. But I will happily talk about the John Jay College example. So, just briefly, for folks who might not have read the book or gotten to that section yet: John Jay College, an institution in New York City, had recognized that they had a lot of services geared towards making sure people finished their freshman year and started their second year, but not as many services geared towards making sure people then ultimately graduate. Specifically, they had noticed that they had a large number of students, or a not insignificant number of students, who completed three-quarters of the credits they needed to graduate but didn't ultimately complete their degree and graduate. They partnered with DataKind, which is an organization that provides data science and AI expertise to nonprofits and government agencies. They worked with those data scientists to really understand their issue, to look at the 20 years of data that the academic institution had collected. The data scientists ran about two dozen models, I think it was, and ended up developing a specific model, a specific tool, for John Jay College to use that identified students who were at risk of dropping out, and potential interventions. The John Jay College staff then made the final determination as to what intervention would be done and how. And two years after this program was started, John Jay College credits the program with helping an additional 900 students graduate. And so that is, I think, one of the examples we talk about of the technology coming together with the subject matter experts, really being used to enhance the mission, and technology and humans working together to make sure that the outcomes are best for everyone.

[00:31:04.33] spk_0:
There's some takeaway there, too, in regard to ethics: the collection and use of the data. Can you talk about that?

[00:31:51.44] spk_2:
Absolutely, absolutely. As we think about data collection, data use, data analysis, in general, especially in the social impact space, you want to make sure that you got consent when you collected the data, that you're collecting it in ways that make sense, that you're not over-collecting, that you're storing it in the right ways, that it's protected in the right ways. And then as you need to do something with it, you can access it, you can use it as a way to foster communication across different departments. One thing that was really exciting in talking to the John Jay College staff: they said this program and that development actually forced conversations across departments, which, if you've ever done any work at an academic institution, you know working across departments on campus can be challenging. And so sometimes the data can force those conversations, and can also help strengthen arguments for the creation or termination of different programs.

[00:32:15.79] spk_0:
Thank you. Because ethics is one of your core values: ethical considerations around technology development.

[00:33:23.63] spk_1:
I like that you're bringing that up, Tony, because I think it reinforces — I mean, Afua was saying this, but just to kind of explain those words — when we're saying that technology is there to help humans, it means that the algorithm that was created is not moving forward and sending, you know, a resource or an intervention to a student. It is not there to do the whole process itself, right? It's there for its portion, and then humans are looking at it. They are deciding who needs what resources, who needs what intervention, and they then do that outreach, right? Versus that idea that I think nonprofits especially think of all the time: if we just got the tool, then this whole thing would be solved, and it would just somehow run its course, you know, and the robots will be in charge. And that's not great. We don't need to do that. We're not looking for robots to be in charge. But also, in this really successful example of technology being used, it still required people. The technology isn't here to replace them. It's to do the part that we don't have the time to do, like crunch all those numbers and figure those things out, and then the people are doing what people are meant to do, which is the relationship side, the intervention side, the support side.

[00:33:43.70] spk_0:
I just want to kind

[00:33:44.49] spk_1:
Of separate the two right?

[00:33:46.71] spk_0:
The tool was to flag those who are at greatest risk of not graduating after they have, I think, three-quarters of the points or credits. So that

[00:33:58.99] spk_1:
that

[00:34:13.73] spk_0:
Right, that's an ideal data mining, artificial intelligence task. Just flag the folks who are at greatest risk, because we've identified the factors. I don't remember what any of the factors were; GPA, I think, was one. But whatever the factors are, identify them, and flag these folks. Now it's time for a human to intervene and give the support to this population, so that we can have 900 more folks graduating than we expect would have without the use of the tool.
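The division of labor described here — the tool scores and flags, people decide the outreach — can be sketched in a few lines. This is an invented illustration, not John Jay's or DataKind's actual model; the factors (GPA, share of credits completed, terms since last enrollment) and the thresholds are hypothetical.

```python
# Hypothetical sketch of an at-risk flagging step. The model only
# scores and ranks; humans review the flagged list and choose the
# intervention. Factors and thresholds are invented for illustration.

def risk_score(student):
    """Higher score = higher estimated risk of not graduating."""
    score = 0.0
    if student["credits_completed"] / student["credits_required"] >= 0.75:
        score += 1.0  # far along but stalled: the population of interest
    if student["gpa"] < 2.5:
        score += 1.0
    if student["terms_since_enrolled"] >= 2:
        score += 2.0  # stopped out: strongest signal in this toy model
    return score

def flag_for_advisors(students, threshold=2.0):
    """Return students for a human advisor to review, highest risk first.
    The tool stops here; outreach decisions stay with people."""
    flagged = [s for s in students if risk_score(s) >= threshold]
    return sorted(flagged, key=risk_score, reverse=True)

students = [
    {"name": "A", "credits_completed": 95, "credits_required": 120,
     "gpa": 2.1, "terms_since_enrolled": 2},
    {"name": "B", "credits_completed": 40, "credits_required": 120,
     "gpa": 3.6, "terms_since_enrolled": 0},
]
print([s["name"] for s in flag_for_advisors(students)])  # → ['A']
```

The point of the sketch is the boundary: `flag_for_advisors` only returns a list for humans to review; nothing in it sends an intervention on its own.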

[00:35:19.70] spk_2:
Yeah, absolutely. And just to continue to build on what Amy was saying: I think sometimes, as nonprofits are considering technology, or maybe hearing pitches about why they should use technology or select a particular technology, it can be overwhelming, because sometimes the perception is that if you adopt technology, it has to then take over your system and remove the human aspect of running your nonprofit. And that's simply not the case. You can always push back as to what those limits need to be, in general but also very specifically for your organization, for your community. What makes sense? What doesn't make sense? And so really prioritizing, as Amy said, using the technology to do those tasks that are simply more efficient, that computers are more capable of doing, while you use the humans involved for the more human touch and some of those more societal factors. I think it's really important to emphasize that as leaders of social impact organizations, as leaders of nonprofits, you have the agency to understand and to decide where the technology is used and where it isn't.

[00:36:57.67] spk_1:
Yeah, we were really conscious when we were working on the book to disrupt this pattern. You know, it's like you learn a new word and then you see it in everything that you read. Once we talk about it here, everything you click on on the internet, you're going to see it. Technology companies have been trying to sell us for a long time, very successfully, that their product is a solution, and they are constantly using that language when you're looking at their website, when they're talking to you: this is an all-in-one CRM solution, this whatever. They are not solutions. They are tools. And as soon as we, as nonprofit staff, start adopting the idea that they are the solutions, we start relinquishing control, right? And thinking, oh, well, the solution is that this tool has all of this. It is just a tool. You are still the solution, right? You are still the human. We didn't want to have that language in the book, so we're always talking about technology as a tool, because without humans needing to put it to work, it doesn't need to exist. We don't need to have a world that's trying to make sure we can maintain all of this technology if we don't need it anymore. Thank you for your service; please move along. We don't need that anymore, and that's okay. We don't need to feel bad that a tool isn't needed anymore. It's not needed? Great. We have different needs now. And that changes the kind of dynamic and relationship inside organizations.

[00:37:24.77] spk_0:
A CRM database is a perfect example of that. It's not gonna build relationships with people for you. It's just gonna keep track of the activities that you have, and it's gonna identify people's giving histories and event attendance, and help with ticketing, etcetera. But it's not going to build the personal relationships that lead to greater support, whether it's volunteering or being a board member or donating, whatever.

[00:37:39.88] spk_1:
It's not the mission. It's not the food at the gala, even if it sold the tickets to the gala, right? Like, it isn't at all.

[00:38:13.47] spk_0:
So I gathered that Wiley did most of the writing on the book, because I mentioned a couple of quotes and nobody claimed them. And also, I see there's only two pictures. I like a lot of pictures in books. You only have two pictures, and then you repeat the same two pictures from the beginning at the end. And they're in black and white; they're not even four-color pictures. So, a few little shortcomings.

[00:38:15.70] spk_1:
That's because in the print book they could only be black and white, but in the e-book, the one that's meant to be in color can be in color.

[00:38:25.00] spk_2:
And also we knew that our readers have imaginations of their own and the words that we have on the page would evoke such strong images we didn’t want

[00:38:33.81] spk_0:
to overly

[00:38:34.58] spk_2:
provide images in the book.

[00:40:08.82] spk_0:
Very good, well played. Okay, it's time for Tony's Take Two. I'm headed to the Holy Land in November. I'm traveling to Israel for two weeks, and I'm wondering if you have suggestions of something that I should see. We can crowdsource my sightseeing. A few things that are already on my itinerary: of course, the Old City in Jerusalem, Haifa and the Bahá'í Gardens, the Dead Sea, and Mitzpe Ramon. You may have some other ideas, things that you found, or places to eat. That would be great: terrific places that I should try in either Jerusalem or Tel Aviv. I'll be spending a lot of time in those two places, but also near these others that I mentioned, like Haifa. So if you know a good restaurant, an eatery, I'd appreciate that too. You can get me at tony at tony-martignetti dot com. I'd be grateful for your Israel travel suggestions and anything else that you may recommend about Israel travel. I haven't been there, so I'd be grateful to hear from you. That is Tony's Take Two. We've got boo koo but loads more time for The Tech That Comes Next, with Amy Sample Ward and Afua Bruce. Let's talk about another story. You all pick one of your case stories to talk about, one that you like.

[00:44:39.78] spk_1:
I can talk about one since the flu already talked about one, but I was thinking because you already said it earlier, the food sector, so there’s one in there on rescuing leftover cuisine, an organization founded in new york. Um, and I think a pretty classic example of non profit trajectory like someone has personal lived experience they want to address, you know, make sure people don’t have the experience they had and create an organization kind of accidentally like they just start doing the work and they’re like, wait, what am I doing? Wait, we’ve just created a nonprofit, you know, and and kind of want to build because they start to have success actually doing the thing that they set out to do. Um, but like many nonprofits you reach the limit of human scale, like you get to the, this is only the number of people I can personally talk to or physically carry food, you know from one restaurant to to a shelter or whatever. Um and realize, oh we’re gonna need some tools to help us make this thing work. Um and grow beyond just the handful of initial people and also like many nonprofits, that was a very reactive process, right? Like oh gosh, we need a calendar tool, here’s one, oh gosh, we need a, you know, a phone tool, here’s one and not what is the best, you know, what what do we really need? How do we solve these goals? So they found themselves a few years in with very common nonprofit sector, like little patchwork, you know, all different kinds of things. They’ve kind of forced and often the the integration to use the technical term, the integration between tools was humans like answered the phone and then typed it into the tool because the person on the phone doesn’t have access to type it into the schedule er right? Like I they were having to be the tech integrations as humans, which meant humans were not doing human work, right? Humans were doing work that that the robots should be able to do. 
Um, and that’s when they brought in more strategic, dedicated technology staff, helped to build, and again, what they didn’t really realize at first is they were building a product, you know? Um, I think this is a bigger conversation that we have with organizations: we have products, we’ve built products. It’s not bad. And I think especially in the US, we’ve come to think that product is like a for-profit word and we will have nothing to do with it. But what it just means is, like, it’s a package. It is a thing that’s doing what it’s meant to do, and we should think about how we make sure it works and who can access it, and, you know, we bring some strategy to it. Um, but their process is really what drew us to including them in the book. They had a really inclusive process where all the different folks that were users, so the volunteers who physically, like, went to the restaurant and picked up that food and took it to an agency, the people in the agencies, the people in the kitchen of the restaurants, all those different people were able to say, oh, I wish the tool did this. I wish that I could do this every day when I need to pick up food. I wish I could get this kind of message. Everyone was able to give that feedback and then see everybody else’s requests, so that as the staff and community and the tech team prioritized, okay, well, what works together? What can we build next? What’s in line to be built next? Everyone had transparency. Everyone could see, everyone understood, okay, my thing is last, or, like, I know why my thing is last, right? Like, people could really see and give feedback and be part of the process the whole time. Kind of back to the very beginning of this conversation, where we said, even if they were not the technical developers themselves, they had important expertise, right? It was good to know, oh, these five different restaurants all want the same thing, what’s happening, right?
Like, what is the thing that’s happening for restaurants trying to offer food? Let’s figure that out. We know who to get feedback from, you know. Um, they’re just such a wonderful example of people really having everyone involved in the whole process. Um, and as they have done that and continue to do that, they were able to move people out of, you know, answering the phone to type into the calendar, and move people into human jobs. Um, grew the organization, it’s now in eight different cities in different states. Um, and that’s just more of the mission happening, right? Because technology was invested in, in the right kind of way.

[00:45:02.73] spk_0:
So takeaways are transparency in prioritizing development inclusiveness, including

[00:45:10.61] spk_1:
the, including

[00:45:11.71] spk_0:
the community, all the, all the different

[00:45:14.65] spk_1:
people

[00:45:15.63] spk_0:
who are impacted, giving them agency

[00:45:18.80] spk_1:
to

[00:45:19.70] spk_0:
contribute and not not have it developed.

[00:45:24.33] spk_1:
Yeah. And they had,

[00:45:25.28] spk_0:
I don’t know how much

[00:46:40.74] spk_1:
of this made it into the book, but, you know, in talking with them and having conversations, you know, there were a number of times where the thing they were hearing from all these different users, that needed to be prioritized, wasn’t something as staff they maybe would have identified, or at least prioritized. But when you’re really listening and having the community drive that development, you know, what you’re investing in is actually going to make it better for your community, right? It’s the thing that they’re asking for, versus you saying, gosh, you know, what’s next on our development docket? Wonder what we could build, like, let’s think of something. You’re not kind of guessing, you know, exactly what needs to be built. And that’s kind of reinforcing for your users that you are listening, that they are valued, that you want this to be as good of an experience as possible for them, right? Which is really kind of, um, bringing people in closer. And I think we all know, especially tony as the fundraiser, like, keeping people is a lot easier than bringing in new people. So if you can keep those partners in, great, you know, you keep those volunteers in instead of having to recruit new ones, because you’re burning them out, because they don’t like working with you, it’s not a good experience, you know? Um, yeah,

[00:47:26.71] spk_0:
let’s talk about the funding, but, but not from the funder’s side, because very few of our listeners are on the funding side. They’re on the grantee side. And so, well, in the book you talk about social impact organizations, but this is tony-martignetti nonprofit radio, not tony-martignetti social impact organization radio. So if we could, please use nonprofits as an example. In their funding requests, they’re doing grants, what can nonprofits do smarter about requesting funds around technology, the development and the use that’s going to be required for the, you know, for the project that they’re trying to get funded?

[00:47:32.08] spk_1:
Yeah,

[00:47:32.45] spk_2:
absolutely. This is a question that Amy and I have gotten so many times since the book has come out.

[00:47:42.97] spk_0:
Okay, well, I’ll give you a milquetoast, bland, ubiquitous question. Not that

[00:49:01.18] spk_2:
it’s a milquetoast question, but it is one that is so important to organizations, and even for nonprofit organizations that have thought about technology before, then the question becomes, how are you going to get it funded, right? And so, um, it’s an incredibly important question. And so I think that there are a couple of things that nonprofits can do. One is to seek out funders who are explicitly funding technology. We’ve seen an increase, I think, over the past several years in different foundations, different companies who are specifically funding technology, and so looking for those types of funders, um, I think is really important. I think then another thing to do is to really make the case, as we make in the book, that, um, funding technology is part of funding the programs of the organization and part of funding the running of the organization. Um, it’s not simply an overhead cost that is a nice-to-have that, if you get around to it, you can do, but really, you need to have strong technology and data practices in order to design your programs, to run your programs. Um, people, you know, are used to being out in the world and interacting with technology in certain ways, and so when they come to your nonprofit, they still probably would like to have a website that sees them, that recognizes them, that’s useful. They might like to know how to get connected to other people in your community, other staff members, and what those communication technologies might look like, and more. And so really looking for ways to write technology into program design, as nonprofits are doing that

[00:49:25.77] spk_1:
as well. And

[00:49:25.97] spk_2:
then I think, thirdly, just being connected with other nonprofits through organizations such as NTEN, and listening to other great podcasts such as this one, um, to hear what, what other nonprofits are doing and what’s been successful as well, and applying some of those techniques to your own organization.

[00:49:47.95] spk_0:
I feel bad that I gave short shrift to the, to the foundation listeners. So, I mean, there are lessons in what you just said. Um, are there one or two other things that we can point out for, uh, foundation listeners, to raise their consciousness?

[00:51:25.89] spk_2:
Absolutely. Um, I think, you know, there are many things about technology that can be funded, especially with nonprofit organizations. And I really encourage foundations to think about what it means to really fund that inclusive innovation process, and to fund, when I say innovation, I mean recognizing that version one might not be perfect. And so funding version 1.1 and 1.2 and version 2.0 is just as valuable as funding version one. We see this all the time in the private sector: you know, my phone gets updates on a regular basis, and that’s okay. And so really wanting to make sure that funders recognize that we don’t need to just create new technology every time for the sake of creating something new, but really allowing the space for that iteration and really adjusting to the community needs is really important. I think also making sure that we’re funding inclusivity, and so that can be things such as, uh, compensating people, you know, from the community for their time, um, as they are involved in this development process; making sure that there’s money in the budget for all staff, not just a member of the tech team, to get training on the different technologies that the organization is using; um, and also that the timelines that are given to nonprofits doing their programs allow for that really critical community listening and community input process into developing any technology, and then ultimately developing and executing programs.

[00:51:49.02] spk_0:
I’m glad you just used community as an example because I wanted to probe that a little deeper how

[00:51:55.99] spk_1:
I

[00:52:11.32] spk_0:
guess, I guess I’m asking how you define community, because you say that, you know, technologists and social impact orgs and policymakers and communities can be, should be, more involved in, uh, technology development. How are you defining communities there?

[00:54:23.04] spk_1:
We’re not, in a way, because technology that NTEN builds for, you know, the community that, that we have, is very different. Um, you know, that would be a bunch of nonprofit staff from mostly the U.S. and Canada, but also all over the world, um, of all different departments, right? That, that would be the community that NTEN has. But the community around, um, you know, the Equitable Giving Circle in Portland, well, that’s Portland-specific, very, you know, geographically different than the NTEN community. Um, it’s folks who can do monthly donations that want to support, uh, you know, the Black community in Portland. Community is meant to be defined based on what is trying to be built and, and for whom it’s meant to be used. Um, and that’s going to be flexible. But I think where it really comes in is what we talked about in the book, in the funding section, but also all of the sections: what does it look like when we expect that transfer to community ownership is the final stage of technology development? Right? And so if that is the final stage, if, um, the community, you know, owning the technology that was developed by someone, um, is the final step, well, there needs to be a level of training and an investment that is very different than if you’re planning to keep this privately yourself the whole time, right? If you’re going to turn it over to the community to own it and maintain it, you’re going to be investing in that community in the process in a very different way. You’re going to be including people in a different way. You’re going to be thinking about knowledge transfer, not just technical transfer, right? Um, and so that relationship with the community is inherent to the goal at the end. And I think that’s, for us, part of what is so important about thinking about that big question of what does it look like for community to really own technology, like, even in the biggest, widest sense. Because right now, we as users don’t own the Internet, right?
Really, there’s, there’s 45 million people just in the U.S. that can’t even access broadband. So the idea that any of these tools, even in the widest, biggest, you know, most accessible sense, are collectively owned isn’t real. And so that goes back to community, but it also goes back to policy. It goes back to how we’re investing in these tools, what values we are even using when we, when we access them. Um, that’s the whole book right there, I guess.

[00:55:00.40] spk_0:
Uh, the book is also, uh, a lot of questions. I always hope to get answers when I read books. This book, lots of questions: questions at the end of every chapter, and then they’re compiled at the end, organized differently. Why did you take that tack?

[00:56:06.65] spk_2:
Absolutely, yes. Our book does perhaps answer some questions, but it does provide questions. And that’s because what this work looks like varies based on the community you’re in, based on your nonprofit organization, based on your role as a policymaker, based on your role as a funder, perhaps. Um, it varies, and so will what your specific solution looks like. There’ll be some of the same building blocks, but the actual techniques you use will need to vary. And so the questions that we have at the end of each chapter, at the end of the chapter on social impact organizations, for example, there are, I think, 25 questions, and five of those are questions that you as a nonprofit can ask of other nonprofits about technology. There are questions that you as a nonprofit can ask of your funders, to start that conversation with funders that we were just sort of summarizing now. What are specific questions that you should be asking of your funders? What are specific questions you should be asking of technologists that come to you and say, have we got a solution for you? Um, what are specific questions that you should be asking policymakers, within the realm of what’s allowed for nonprofits to do as part of the policymaking process? And what are some real questions that you can ask of the communities that you serve and the communities you partner with, to really get at what their needs are and how they might tie to some of the technology needs for your organization?

[00:56:43.69] spk_0:
So what have we, uh, what haven’t we talked about yet that either of you would like to, uh... you feel like I’ve spent enough time on the... well, here I am asking you and then I’m proposing something, so I’ll cut myself off. What haven’t we talked about yet, either of you?

[00:58:18.09] spk_1:
I mean, I think one thing that we have experienced is that there are some topics, like how do we do this, or how do we fund this, or how do we make change, um, you know, there are some topics that recur throughout a lot of conversations. But ultimately, we have never had the same conversation about the book twice, because that’s part of writing a whole book that’s just questions, you know, and isn’t all the answers. That isn’t, oh, great, turn to chapter three, where we list the 10 things you need to do tomorrow. Like, there are no... I mean, there’s probably 100 things, right? But, um, because of that, what we wanted to do when we wrote the book, even if, you know, we said at the beginning, even if no one reads this but ourselves, we want to feel like we are starting a conversation that we are just going to keep starting and keep having and keep getting closer to figuring out what’s next, because it’s gonna be a whole long path. Um, and if we’re here to write a how-to book, who are we to write that? Right? Who are we to write the how-to book on how we completely change the world? But what if we wrote a book that said, y’all, how do we change the world? Like, really, truly, how? Let’s go, let’s go figure that out. That motivates us. And so if it motivates us, it probably motivates others. And these conversations, I mean, I just love them, because, yes, we had some of those recurring themes that all of us think about all the time, but this was a completely different conversation than we’ve had before, and, well, you know, different than we’ll have tomorrow. And I think what we’ve talked about, the two of us, is when we have

[00:58:31.93] spk_0:
not only different, but better,

[00:59:15.21] spk_1:
but when we have opportunities to talk about the book together with folks like you, knowing that people are listening, right, thousands of, of nonprofit radio listeners, we want to, in a way, have this be like a practice session for all of them, so that when they finish the podcast and they go to their staff meeting, they’re like, hey, Afua and Amy, like, never had their sentences thought out before they started, probably said um a million times. The bar isn’t high. I can just start asking questions, right? That’s why we have all the questions at the end. I can just start talking about this. There is no perfect; perfect doesn’t exist. So let’s not worry that I don’t know the exact way to talk about this technology project. Let’s just start talking about it and, and get in there and have these conversations. We have almost modeled that process of just practicing the work of, of changing things.

[00:59:33.45] spk_0:
Anything you would like to, uh, leave us with? Anything we haven’t talked about that you would like to?

[01:00:00.54] spk_2:
You know, the subtitle of the book talks about building a more equitable world, and we call out a few specific roles. But really, I think it’s just important to recognize that we all have a role to play in building a more equitable world. And so if you see something in this world that you want changed, hopefully this book does give you some real ideas about how you can go about doing that, some real questions to ask, to find other people who can help you along that journey. Because really, building an equitable world is an inclusive process, and that includes you. So that’s, that’s all I would add.

[01:00:43.80] spk_0:
She’s Afua Bruce, at afua underscore bruce. Her co-author is Amy Sample Ward, at amyrsward. And you’ll find the book, The Tech That Comes Next: How Changemakers, Philanthropists, and Technologists Can Build an Equitable World, at thetechthatcomesnext dot com. Amy, thank you very much. Pleasure.

[01:00:46.45] spk_1:
Thanks so much Tony.

[01:00:48.33] spk_2:
Thank you.

[01:01:39.63] spk_0:
You’re welcome. Thank you. Next week, Gene Takagi returns with trust in nonprofits. If you missed any part of this week’s show, I beseech you, find it at tony-martignetti dot com. We’re sponsored by Turn Two Communications, PR and content for nonprofits. Your story is their mission. turn hyphen two dot c o. And by Fourth Dimension Technologies, IT infra in a box, the affordable tech solution for nonprofits. tony-dot-M.A.-slash-Pursuant D, just like 3D, but they go one dimension deeper. Our creative producer is Claire Meyerhoff. The show’s social media is by Susan Chavez. Marc Silverman is our web guy, and his music is by Scott Stein. Thank you for that information, Scotty. Be with me next week for nonprofit radio, big nonprofit ideas for the other 95%. Go out and be great.