Making sure AI is Worthwhile

Why ‘Worthwhile AI’ Is the Mental Model Our Sector Needs

A better frame

If you’ve been anywhere near CAST’s work on AI over the past year, you’ll have encountered the idea of Worthwhile AI — a question and a mindset that keeps mission, community and values at the centre of every technological choice. As the pace of change accelerates, we think this is one of the most useful frames the sector has for navigating AI safely and intentionally.

At DEV, we’re completely aligned with this approach. In fact, worthwhile-ness is fast becoming our primary test for whether AI deserves a place in a project: Does it advance the mission? Does it respect the people served? Does it avoid or reduce harm? And, crucially, is it genuinely better, safer and less damaging than the non-AI alternative?

Start with strategy, not tools

This is why we always begin with need, not novelty. Through discovery, service analysis and Wardley Mapping, we establish the real strategic drivers:

  • What outcomes matter most?
  • Where are the bottlenecks or unmet needs?
  • Which capabilities are missing — and is AI the right way to supply them?

This planning discipline keeps us grounded. It prevents “AI for AI’s sake” and ensures that experimentation is tied to impact, not hype. It’s exactly the spirit of CAST’s Worthwhile AI: you don’t chase technology; you consciously choose it because it serves your mission and community.

Workshopping AI solutions: working with funders to identify the best uses for emerging technology.

Measuring impact through ISO 14001

Another reason the “worthwhile” framing resonates with us is that it aligns beautifully with the environmental lens we use through our ISO 14001 process. That standard commits us to understanding, measuring and reducing the environmental impact of the work we deliver — and AI is rapidly becoming an area where the environmental footprint can’t be ignored.

When you ask ‘Is this worthwhile?’, you naturally also ask:

  • What’s the energy cost of this model?
  • What additional water use sits behind training and inference?
  • Can a simpler, smaller, or more targeted model achieve the same outcome?

The two lenses reinforce one another: a use of AI can only be genuinely worthwhile if its environmental footprint has been understood and weighed, and ISO 14001 gives us the discipline to do exactly that.

Strengthening trust through ISO 27001

The worthwhile framing also sits neatly alongside the security and governance commitments we uphold through ISO 27001. If ISO 14001 anchors us in environmental responsibility, ISO 27001 anchors us in safeguarding people, data and organisational integrity — and AI raises new considerations on all three fronts.

When you ask ‘Is this worthwhile?’, you’re also implicitly asking:

  • What data will this model touch, transform or store — and do we have a lawful, ethical basis for that processing?
  • Are we introducing new attack surfaces or dependencies through third-party AI services?
  • What controls, auditability and assurance do we need to prevent misuse, model inversion, or unintended disclosure?
  • Can we redesign the workflow so the model operates on less data, anonymised data, or no personal data at all?
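
To make that last question concrete, here is a minimal sketch of what data minimisation can look like in practice. It is purely illustrative: the function name, the redaction patterns and the categories are our own inventions, not a complete or compliant anonymisation scheme, and a real project would also handle names and other identifiers with a vetted approach.

  import re

  # Illustrative patterns only: a real workflow would use a vetted redaction
  # library and document the lawful basis for any data that still leaves
  # the organisation.
  REDACTION_PATTERNS = {
      "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
      "PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
      "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b", re.IGNORECASE),
  }

  def minimise(text: str) -> str:
      """Replace obvious personal identifiers before text is sent to a third-party model."""
      for label, pattern in REDACTION_PATTERNS.items():
          text = pattern.sub(f"[{label}]", text)
      return text

  note = "Call Sam on 07700 900123 or email sam@example.org about SW1A 1AA."
  print(minimise(note))   # Call Sam on [PHONE] or email [EMAIL] about [POSTCODE].

The design choice matters more than the particular patterns: whatever reaches an external API should be the smallest, least identifying version of the data that still lets the model do its job.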

AI systems, especially those delivered via external APIs or opaque model pipelines, can amplify risks unless they’re evaluated through a structured security lens. ISO 27001 gives us that discipline: risk assessment, control selection, supplier evaluation, incident readiness, and a culture of continual improvement.

What we’re hearing from the sector

Our conversations across charities, NGOs and social change organisations echo CAST’s findings. Leaders are increasingly weighing:

  • Privacy risk — Will AI introduce new vulnerabilities? Are we exposing people to harm?
  • Energy and resource usage — How sustainable is this technology? What hidden costs sit behind it?
  • Mission fit — Does AI genuinely strengthen delivery, or does it distract from what already works?

The Worthwhile AI framing helps teams hold all these tensions in view. It keeps the conversation practical rather than abstract, hopeful rather than fearful, and rooted firmly in service of communities rather than service of technology.

A shared direction of travel

We see CAST’s work as a critical compass for the sector. By championing worthwhile-ness instead of hard definitions or rigid checklists, they’ve created a question that adapts as the landscape shifts. It’s a principle we use ourselves, and one we encourage partners to adopt when exploring AI for the first time.

If this thinking continues to spread across the sector, we’ll see more confident experimentation, more community-led decision-making, and more responsible innovation — all grounded in mission, care and impact.


Does that feel right?

If you agree that the most important question isn’t ‘Can we use AI?’ but ‘Is this use of AI truly worthwhile?’, and you’d like to see what we’re working on, please let us know.