F#ck AI and the Horse It Rode In On: 5 Ways AI Is Destructive to Grant Work
Guest Blog By Melody B. Hernandez
Photo by Melody B. Hernandez
Note: We ultimately don't agree with the conclusion of this post, but we do agree that AI is being misused and that the misuse often hurts all of us. This blog includes many cautionary tales about AI being used as if it can *think* and make decisions independently.
While I’m no spring chicken, this is not an article by some old person bitching about having to use technology. I’m what the internet affectionately calls a geriatric millennial. At least I think it’s affectionately. Geriatric millennials are people born in the very early 1980s. This micro-generation is defined by a lifetime of adapting to new technologies and often served as guinea pigs as new technologies unfurled. Our lives are defined by the transition from an analog world to a digital one. Most of us were the first in our communities to have computers integrated into school learning from elementary school onward. We love to Google things because we remember a time when we had to march our asses down to the library and dig in volumes of books to answer our burning questions. We remember the pre-Urban Dictionary days when we had to ask an older sibling/cousin/friend an embarrassing question and had to hope they wouldn’t tease us with an inaccurate response (*ahem* thanks, Besty, for those fun memories).
While I am not the most eager to switch to new technologies, I am most certainly not anti-technology. I have not had a landline since I lived on campus my sophomore year of college. I jumped on all the social media bandwagons from Snapchat to Bluesky. I switched to Gmail as soon as Hotmail became uncool. I use a plethora of technologies for fun, from photo editing to digital drawing apps and more.
I’m on Discord for f#ck’s sake.
So when I say I f#cking hate AI and I am f#cking sick of it being rammed down my throat everywhere I turn, I have my reasons.
It is not the usual annoyance of having to learn new technology; it’s this specific new technology, how it is being used, and the ethical and environmental implications that ensue with its use. Here are five reasons behind my anti-AI stance:
1. It is removing humans from human-centered work.
I recently helped some clients submit grant applications to the local Human Rights Commission. The RFP* was a mess that resulted in hundreds of Q&As and a higher-than-average number of hours devoted to each proposal. Now, it’s common for nonprofit folks to complain about RFPs, and rarely do you hear, “Wow! What a well-written and easy-to-understand RFP!!” But this was something different. My colleagues and I, who live and breathe grant work, were on the phone with each other like, “What is happening here????” Folks who are newer to grant work were reaching out to me for support, almost in tears because they needed the money, but were overwhelmed by the pages and pages and pages of confusing content that somehow seemed to say a whole lot while simultaneously saying nothing at all.
Then we noticed the last page of the RFP, which read:
Disclaimer – Use of Generative AI Assistance
This RFP includes material generated with the assistance of generative artificial intelligence (AI) tools. All AI-generated content has been thoroughly reviewed, verified, and refined by HRC staff to ensure it accurately reflects the intentions, priorities, and compliance requirements of the HRC.
In preparing this document, we adhered to best practices, including:
• Fact-checking: All AI-generated content was carefully verified for accuracy.
• Disclosure: The use of generative AI technology in developing this content has been fully disclosed.
• Data Sensitivity: No sensitive or confidential information was entered into any public AI tools.
There’s something dystopian about the Human Rights Commission removing humans from their funding disbursement process. The RFP was focused on human-centric work: behavioral health, creating pathways into college for youth from marginalized communities, basic needs supports for families struggling to survive. Not only was a computer program – a glorified algorithm – tasked with creating the framework for this human-centered work, it also seems it will be tasked with supporting the decision-making process, as evidenced by the demand that all proposals be submitted in Word format when the protocol to date has been to submit everything in PDF.
I had one client send me a draft they had crafted internally that included some information that concerned me, so I reached out to them to point out the discrepancy between their proposal and my understanding of eligibility. They informed me that they had fed the RFP and all the Q&As into ChatGPT, along with their previously written grant proposals, and that ChatGPT had assured them the information was aligned with the guidelines. I informed them that I, an actual person with comprehension skills, had attended a live session and had been informed verbally about this specific guideline.
The thing is, generative AI does not understand politics or hints given with a nudge-nudge-wink-wink when there are things that a funder does not want to put into writing. It doesn’t understand nuance or the grey area where most humans live and work, reducing everything to black and white and taking everything at face value. In a quick phone call, I was able to advise some tweaks that ensured a better-aligned and more competitive proposal.
2. It is not good at what it does.
Working with AI to do grant work right now is like trying to make Thanksgiving dinner with the help of an eager 4-year-old. Love the enthusiasm, but now is not the time and get out of my damn way.
Here are some examples from this week alone where I caught AI f#cking up royally (and this is coming from someone who is doing my damnedest to actively avoid it):
The AI summary my email gives me (which I didn’t ask for and can’t seem to turn off) summed up an email requesting a meeting with me later that day as, “Alana too busy to meet, meeting cancelled.” She had, in fact, asked to proceed with our scheduled meeting at 2 pm unless I wanted to cancel the meeting. Had I not opened the email and read her wording for myself, I could have missed an important call that would have resulted in an unnecessary delay for the grant project we were meeting to discuss.
I accidentally clicked on the not-so-helpful suggestion to have AI rewrite a carefully worded email with thoughtful feedback regarding a client's attempt to write a grant proposal. The AI replaced it with a saccharine, overly positive response that missed all the nuance of the corrective actions I had suggested, actions that would have helped them hone their grant skills without causing offense. I had to rewrite the entire email.
Google’s AI Overview confidently told me that the acronym my client uses for their organization’s name meant something completely different and that they work in a completely different field. Mind you, I had conducted that same Google search with the exact same phrasing on multiple occasions in the pre-AI Overview days and had never seen search results that far off base.
When funders rely on AI to write RFPs, we end up wasting countless hours trying to understand and they end up having to work double time to field hundreds of questions and provide additional clarification. When they feed the proposals into AI for summaries, they are likely to get faulty, inaccurate information like the AI email summaries provide. When they search for additional information, they are more likely to be led astray by inaccurate AI Overviews. This all opens the door for a slew of problems that range from the funding not reaching its intended goals to having to reissue RFPs to potential lawsuits.
Like a four-year-old in the kitchen, there is hope for the future. With practice over time, that four-year-old can hone their skills and learn and grow to become helpful in the kitchen. And that four-year-old is probably already doing better in the kitchen than when they were three and a half. Yet, I still maintain that Thanksgiving is not a time for them to try and help out. Is grant work the Thanksgiving of cooking? I believe so. It is the process we currently use in this country to determine how billions are spent. But more than just the dollars, grant work is how we develop important community work that addresses societal issues ranging from feeding people to researching diseases and cures to providing mental health supports. Misallocation costs our society dearly in the long run.
3. It’s horrible for the environment.
I have toyed with trying to be a part of the solution, meaning working with the many wonderful people who are trying to work out the AI kinks and get it set on the right path. However, I keep coming back to the environmental havoc that it is causing. Recent articles have highlighted how much energy is being used by simply thanking ChatGPT. Rolling blackouts are predicted for the areas where generative AI data centers are located. The water used to cool the data centers is decreasing our global access to clean water.
Grant professionals are working diligently to secure funding to address the very environmental issues that generative AI is fueling.
While I am acutely aware that no one person can solve the climate crisis and the onus of responsibility belongs on the shoulders of people in power, I do not feel good about leaving the water running when I brush my teeth. Nor do I feel good about using AI systems that I know are damaging our planet at a faster rate than their non-AI peers.
4. It’s biased.
AI is biased. It is not leveling the playing field; it is reinforcing systemic inequities. Chapman University has outlined several stages where AI bias can and does occur, including: Data Collection, Data Labeling, Model Training, and Deployment. One example they cite is using AI for hiring and feeding it historical data that results in a bias toward hiring male applicants. Selection bias can (and does) occur when the program is fed limited data sets that are not representative of real-world populations.
IBM provided a succinct description of the issue of bias in AI algorithms:
The models upon which AI efforts are based absorb the biases of society that can be quietly embedded in the mountains of data they’re trained on. Historically biased data collection that reflects societal inequity can result in harm to historically marginalized groups in use cases including hiring, policing, credit scoring and many others.
Returning to the local Human Rights Commission’s recent use of generative AI described above: The dollars being disbursed through the disastrous RFP were part of the Dream Keepers Initiative, funding reallocated from the local police budget to community work that supports and uplifts the local Black community so as to (attempt to) address the systemic inequities that result in the overrepresentation of Black San Franciscans in the justice system. Generative AI, which has been proven to reinforce systemic inequities because it pulls from historical data that is rife with racial bias, has no business being used for funds whose purpose is to address racially disparate funding policies and practices.
That being said, is there ever an appropriate time to use biased systems?
5. It diminishes the positive impacts that grant work has on programs and organizations.
One reason I love grant work is my belief that by going through the process, we build stronger, more impactful programming. When approached with thought and care, the questions give organizations an opportunity to think through really important aspects of program design and implementation: the purpose of the program, how to define success and then measure progress toward that aim, and how to account for the costs associated with the program and ensure that the organization has the resources it needs to create positive change in the community.
However, when funders pump out RFPs using generative AI and give us a couple of weeks to respond (often while limiting overhead to an unfeasibly low level), we have less time to develop an impactful program. When the focus is on words that an algorithm will recognize and report as good or bad, as fundable or not fundable, focus is taken away from whether or not the program works. Generative AI cannot yet determine if there is a throughline that will bring the program to its goal, so in relying upon it, we lose that vital aspect of grant work and its associated positive impacts in the community.
Ultimately, with all these negative impacts, I keep coming back to: Why?
Why are we using AI for grant work in the first place? To reduce the time and energy that people put into this work? To save money? To pump out RFPs and proposals faster?
Because there are better ways to reduce the time and energy that people put into this work. Namely: using existing proposals and trust-based philanthropy.
There are better ways to save money: Since trust-based philanthropy requires less administrative staff time, it reduces overhead and saves on costs. We could save money on electric bills and the future costs of cleaning up the environmental havoc left in AI’s wake. We could save even more by developing collaborative approaches to societal problems that competitive and restrictive grant processes prevent.
If someone could give me a good reason why AI is better than other options – and options that are also more environmentally friendly – then I would be more than happy to work with it. However, it is becoming more and more common for funders to write RFPs using generative AI that are then read and summarized by generative AI to then write a proposal using generative AI to then have the funder feed the response into AI for summaries that are used to decide who gets the funding. Isn’t there a better way? Is this a 2025 example of “This meeting could have been an email”? Like, “This AI back-and-forth could have been a phone call”? Or perhaps we could have just given each other a few sentences of program info in exchange for a few sentences of funding priorities. A slideshow? An interpretive dance? All of which are infinitely better for the environment, a much more productive use of time, and result in better decision making for funding.
* RFP stands for Request for Proposal. This is the document that a funder releases that explains how to apply for the grant money, the purpose of the grant, etc. It is the directions and the guidelines and for government funding, it is often a very long document that can include dozens to hundreds of pages of instructions.
Melody Hernandez
Founder/Principal
Root Reach Rise, Inc
415-350-9048