
Beyond the Hype: What we learned from our first two AI solutions in grants analysis

By Daniel Fairhead, Lead Developer at The Developer Society

A large international grantmaker came to The Developer Society with a challenge. They had over 10,000 grant applications in their database, along with detailed information about grantees and project outcomes. The problem? Their proprietary database system had such a clunky interface that extracting meaningful insights was practically impossible.

They asked us to explore whether AI could help unlock the value hidden in their data. Here's what we learned from three different approaches.

Experiment 1: RAG (Retrieval-Augmented Generation)

About a year ago, we started with what was then the cutting-edge approach: a RAG document library.

How it worked:

  • We imported all grant documents into a vector database (Postgres)
  • When users entered a query, we'd find the top 10 matching documents
  • The AI would use those documents as context to answer the question
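
The pipeline above can be sketched in a few lines. This is a toy illustration, not our production code: the real system used a proper embedding model with vectors stored in Postgres, whereas here a bag-of-words count vector stands in for the embeddings so the example runs self-contained.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    # The real system stored proper embedding vectors in Postgres.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Similarity between two vectors, as a vector database would compute it.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 10) -> list[str]:
    # The retrieval step: rank every document against the query, keep the top k.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # The generation step: pack the retrieved documents into the LLM's context.
    context = "\n---\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical example documents, not real grant data:
docs = [
    "Grant 101: support for orphans in the Gambia.",
    "Grant 102: sexual health clinic outreach in Albania.",
    "Grant 103: clean water infrastructure in Nepal.",
]
```

The weakness is visible in `retrieve`: whatever `k` is, everything outside the top `k` is invisible to the model, which is exactly why broad thematic questions failed.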

The result? It didn't work.

For specific queries like "Which grantee received funding to work with orphans in the Gambia?", it performed well. The system would match the query to the relevant grant and provide an answer.

But ask something broader like "What topics are we commonly addressing in sexual health clinic grants?" and the cracks showed. The system might match "sexual health" but would only load a limited number of documents, easily missing important topics. We tried increasing the number of matches, but quickly hit the context limits of the LLMs.


Experiment 2: "Infinite context" with map-reduce

Our next approach used a map-reduce strategy to work around context limitations.

How it worked:

  • We'd take the user's question and 'map' it across every single document
  • Each document would produce a small summary in response to the query
  • We'd run 30-50 queries simultaneously, then combine the summaries into a final report
  • For a question like "What topics are we commonly addressing in sexual health clinic grants?", we'd ask each document: "Does this mention anything related to this question? If yes, summarise it. If not, return 'N/A'."
  • Then we'd batch together all the non-N/A summaries and ask a fresh LLM to combine them into a single answer.
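
The map and reduce steps above can be sketched as follows. The LLM calls are faked with keyword matching so the example is self-contained; a real implementation would replace `fake_llm_map` and `fake_llm_reduce` with calls to your LLM provider.

```python
from concurrent.futures import ThreadPoolExecutor

def fake_llm_map(question: str, document: str) -> str:
    # The 'map' step: ask an LLM whether one document is relevant and, if so,
    # to summarise it. Faked here with crude keyword overlap so it runs offline.
    keywords = {w for w in question.lower().split() if len(w) > 5}
    if keywords & set(document.lower().split()):
        return document.split(".")[0]  # pretend this is the LLM's summary
    return "N/A"

def fake_llm_reduce(question: str, summaries: list[str]) -> str:
    # The 'reduce' step: a fresh LLM call that combines all non-N/A summaries.
    return f"Themes found across {len(summaries)} grants: " + "; ".join(summaries)

def map_reduce(question: str, documents: list[str], workers: int = 30) -> str:
    # Fan the question out across every document in parallel batches...
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda d: fake_llm_map(question, d), documents))
    # ...then collapse the relevant summaries into one answer.
    summaries = [r for r in results if r != "N/A"]
    return fake_llm_reduce(question, summaries)

# Hypothetical example documents, not real grant data:
docs = [
    "Grant 102: sexual health clinic outreach in Albania.",
    "Grant 205: sexual health education for teenagers.",
    "Grant 301: clean water infrastructure in Nepal.",
]
```

Note that with real LLM calls, every document costs one API request per question, which is where the speed and cost problems described below come from.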

The result? Limited success on some queries, but major drawbacks.

Running this across 10,000+ grants was slow and extremely expensive. We added manual filters so users could narrow things down by application round or other criteria, but this defeated the purpose of having AI do the heavy lifting.

Another showstopper issue was counting. LLMs are notoriously bad at maths. They work by building statistically likely answers rather than actually performing the task. Ask "How many Rs are in 'Strawberrrrry'?" and they'll probably say "3" because that's the statistically common answer to questions shaped like that.

So when we asked "How many grantees are addressing sexual health in Albania?", the system would review every grant and summarise each one, but couldn't accurately count them. We tried various adaptations, but couldn't get the speed or accuracy our partner needed.
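
The underlying lesson is that counting is a job for deterministic code, not for a statistical text generator. One adaptation along these lines: let the LLM extract the filter terms from the question, then do the arithmetic in ordinary code. The records and field names below are hypothetical, purely for illustration.

```python
# Hypothetical grant records; a real system would fetch these from the database.
grants = [
    {"id": 1, "topics": ["sexual health"], "country": "Albania"},
    {"id": 2, "topics": ["clean water"], "country": "Albania"},
    {"id": 3, "topics": ["sexual health"], "country": "Kosovo"},
]

def count_grants(grants: list[dict], topic: str, country: str) -> int:
    # Only the filter terms (topic, country) would come from the LLM;
    # the counting itself is done deterministically in code.
    return sum(
        1 for g in grants
        if topic in g["topics"] and g["country"] == country
    )
```

This split, where the model interprets the question and plain code does the arithmetic, is essentially what the tool-based approach below formalises.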




Experiment 3: Tool-based approach

Over the past year, LLMs have developed a powerful new capability: using tools. As a developer, I can write functions, explain to the LLM how to use them, and let it decide when to call those functions.

We wrote tools that could query the grants API directly. The API documentation was awful and the system itself was slow, but—surprisingly—it worked.

How it worked:

  • Given a question like "How many grantees are addressing sexual health in Albania?"
  • The LLM reviews its available tools and finds one for reading API documentation and another for querying the API
  • It reads the documentation to learn how to filter by topic and country
  • It uses the API tool to get a count of all relevant topics
  • It identifies topics related to sexual health
  • It uses the API again to count grants related to each topic
  • It returns the answer to the user
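
The loop above can be sketched as follows. Both tools are stubs with hypothetical names and responses (our real tools made HTTP calls to the grants API), and the "transcript" replays a fixed sequence of calls, whereas a real agent loop feeds each tool result back to the model before it decides on the next call.

```python
# Hypothetical tool functions; the real ones called the grants API over HTTP.
def read_api_docs(section: str) -> str:
    # Stub: always returns the same documentation snippet.
    return "GET /grants?topic=<id>&country=<code> returns {'count': int}"

def query_api(topic: str, country: str) -> dict:
    # Stub: a tiny fake dataset in place of the real, slow API.
    fake_db = {("sexual-health", "AL"): 12}
    return {"count": fake_db.get((topic, country), 0)}

TOOLS = {"read_api_docs": read_api_docs, "query_api": query_api}

def run_tool_calls(tool_calls: list[dict]) -> list:
    # Execute the tool calls an LLM has requested, in order. In a real agent
    # loop the model sees each result before choosing its next call; here we
    # replay a fixed transcript for illustration.
    results = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]
        results.append(fn(**call["arguments"]))
    return results

# The kind of call sequence the model produced for the Albania question:
transcript = [
    {"name": "read_api_docs", "arguments": {"section": "filtering"}},
    {"name": "query_api", "arguments": {"topic": "sexual-health", "country": "AL"}},
]
results = run_tool_calls(transcript)
```

Because the counting happens inside `query_api` rather than in the model's head, the arithmetic problems from experiment 2 disappear; the cost moves to latency, as each round trip takes real time.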

The result? Success! Mostly.

It works, though it's not lightning-fast. Each API call requires waiting, and the LLM needs to think about the results before deciding what to do next. A single question could easily take 10 minutes to answer.

The main challenge is the API itself—it's extremely limited and awkward. We had to add considerable detail to the system prompt explaining its quirks and limitations, and then the LLM often needed to make 15 or more calls to the API to get the information it needed.


Comparing the three approaches

RAG
  • Best for: specific, targeted queries
  • Struggles with: broad pattern recognition
  • Speed: fast

Map-reduce
  • Best for: thematic analysis across thousands of documents
  • Struggles with: precise counting, cost at scale
  • Speed: slow

Tools
  • Best for: quantitative queries with existing APIs
  • Struggles with: API quality and documentation
  • Speed: moderate (heavily API-dependent)


What we learned

AI is genuinely interesting for analysis work, but it's not magic. Even AI needs data in usable formats to produce useful results.

Despite what much of the AI hype suggests, you can't just "throw your data at AI" and expect brilliant insights. That's simply not how it works.

AI is one tool among many, and you need to understand how to apply it correctly. Your data needs to be in the right shape for the tool you're using. You can't use a bench saw to cut a hole in an installed worktop—you need a jigsaw. And you wouldn't use a jigsaw to cut 100 planks to length—for that, you want a bench saw.

This is where The Developer Society can help. We're experts in using technology efficiently and effectively, with years of experience and genuine passion for the charity and broader third sector. We partner with you to find the right solution, rather than just providing a service designed to extract as much money as possible.

If you're sitting on a database full of valuable information but struggling to extract insights from it, let's talk. We'd love to help you find the approach that actually works for your needs!