10 reasons nonprofits, charities and purpose-led organisations shouldn't use Large Language Models

Innovation team

The last few months have seen incredible acceleration in the development and adoption of Large Language Models (LLMs). It took a mere five days for ChatGPT to reach one million users and just two months to reach 100 million. That’s astounding.

We’re living through technological disruption at a speed our team has never experienced. Hundreds, if not thousands, of new LLM applications are being explored and launched every day; it’s hard to keep count.

AI “Experts” are emerging left, right and centre, many of whom are steaming ahead, praising the benefits of speed, efficiency and potential progress for society.

So we should harness the power of AI systems and the benefits they bring to our organisations and those we exist to help, right? Not so fast. Have you paused to ask whether you should engage at all?

For the past few months, some of our team have been researching the AI landscape in more depth. We’ve explored the foundations of these commercial LLMs, researched the AI ethics landscape (people and frameworks) and run an internal strategic foresight workstream to prepare for a range of possible future scenarios, so we can help shape the inclusive, safe AI future we believe should exist.

And long story short, it’s far from all rosy. The race for progress and the commercially driven development of these models and systems come with some BIG downsides and ethical conundrums.

The foundations of these commercial models, in particular, come at considerable cost. The future ramifications of AI technology need to be considered carefully before you choose to engage with or adopt AI systems, because the implications for your organisation, society and our environment are huge.

So, we’re sharing the top 10 reasons an organisation shouldn’t use LLMs today:

  1. Consolidation of power to a small few

    AI, and LLMs in particular, are extremely resource-intensive endeavours. Only a handful of organisations on the planet have the resources, funding and skills to develop bespoke, specialist models like ChatGPT and Google’s Bard. Those leading the “race” - Google, Microsoft, IBM and Amazon - are all commercially driven. The list is slowly growing, with some more responsibly driven LLMs, like Claude, but our choices about which tools and organisations to support matter.

  2. Worker exploitation

    It has been well documented that many of these LLMs have been built on worker exploitation: from Kenyan workers being psychologically exploited and paid less than £2 an hour to make ChatGPT less “toxic”, to questionable conditions for gig workers such as data labellers, delivery drivers and content moderators (so-called “ghost workers”). Some AI systems, including many LLMs, have been developed with worker exploitation at their very heart.

  3. Algorithmic discrimination, coded bias and unreliable outputs

    The answers given by LLM tools like ChatGPT and Google Bard may at first glance seem unbiased, but they aren’t as neutral as many people expect.

    OpenAI’s CEO Sam Altman has acknowledged these shortcomings firsthand. These systems encode human bias and discrimination, producing outputs that are often factually incorrect (“hallucinations”), sexist, racist or outright offensive. Such outputs have shocked ethics experts and continue to reinforce dangerous negative bias and discrimination.

  4. Computational cost and environmental impact

    Every time an LLM is trained or used, a whole system is set in motion that consumes energy and resources.

    According to Stanford’s Artificial Intelligence Index, training GPT-3 produced the equivalent of 502 tonnes of carbon dioxide emissions. That’s roughly the same as driving a car around the Earth’s equator more than 60 times, or about 1,576,272 miles (see the rough sanity check after this list).

    The water required to run data centres (predominantly for cooling) is also huge. A recent peer-reviewed study estimated that a single system “drinks” 500ml of water for every 20-50 questions.

    In the midst of a climate crisis, we need to carefully consider our resource use.

  5. Copyright infringement and liability risk

    Generative AI models like DALL-E and Midjourney produce truly outstanding results at incredible speed; however, these models rely on existing content scraped from the internet to generate new material. It’s unclear how copyright law applies in this scenario, and the answer may differ from country to country. Numerous artists and businesses are already pursuing copyright-infringement claims. OpenAI is looking to sidestep current and potential lawsuits by passing responsibility to end users, meaning it’s those of us using the tools who would be assuming the risk.

  6. Lack of data transparency & privacy concerns

    The lack of transparency around data sourcing and the opaque development of the latest release, GPT-4, has raised big questions around privacy and ethics. OpenAI, in particular, has not released the data sources for GPT-4, citing safety and competitive concerns. Questions around datasets and privacy violations led Italy to temporarily block ChatGPT, with data regulators in France, Ireland, Canada and Germany pursuing their own lines of enquiry.

  7. Reputational risks

    The irresponsible use of AI, and failure to explore unintended consequences, can seriously damage an organisation’s brand and reputation. Fear that AI has been used to replace a person, privacy issues or dangerous outcomes can all lead to negative associations with a brand. CNET, a digital publication, saw this happen earlier this year when errors were found in articles it had quietly generated with AI. This potential erosion of trust is a vital consideration for nonprofits and purpose-led organisations.

  8. Obscuring the real risks

    The fluent outputs of LLMs can make them seem sentient, fuelling fears of future scenarios in which Artificial General Intelligence (AGI) takes over. That fear of AGI obscures the actual, present-day harm created by the bias, discrimination and factually incorrect information these models produce. As AI Snake Oil puts it, the Future of Life Institute’s misleading open letter about sci-fi AI dangers ignores the real risks.

  9. Accountability & responsibility

    Beyond the risk of copyright and IP infringement, another very serious question is who is responsible and accountable for the outputs of our LLM applications. This is a legal and moral grey area with very little case law, and as The Verge puts it, the scary truth about AI copyright is that nobody knows what will happen next.

  10. Opportunity costs

    What resources are being poured into LLM exploration and development that could be used for greater impact or good elsewhere? This isn’t a problem unique to LLMs, but it’s one that must be considered carefully from the very start. If you focus hard on LLMs, what other areas of innovation are you neglecting?
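As a rough sanity check of the carbon equivalence in point 4, here’s a minimal sketch that converts the reported training emissions into driving distance. The car emission factor (~0.32 kg of CO2 per mile) and the equator circumference are our own illustrative assumptions, not figures from the Stanford AI Index, so treat the output as a ballpark only.

```python
# Rough sanity check of the carbon equivalence in point 4.
# Assumptions (ours, for illustration only): an average passenger car
# emits ~0.32 kg of CO2 per mile, and the Earth's equatorial
# circumference is ~24,901 miles.

TRAINING_EMISSIONS_KG = 502_000   # 502 tonnes of CO2 to train GPT-3 (AI Index)
KG_CO2_PER_MILE = 0.32            # assumed average car emission factor
EQUATOR_MILES = 24_901            # approximate equatorial circumference

equivalent_miles = TRAINING_EMISSIONS_KG / KG_CO2_PER_MILE
equator_laps = equivalent_miles / EQUATOR_MILES

print(f"Equivalent driving distance: {equivalent_miles:,.0f} miles")
print(f"Trips around the equator:    {equator_laps:.1f}")
# -> roughly 1.57 million miles, or about 63 trips around the equator
```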

Despite these concerns, we believe there are still huge opportunities to harness AI, including LLMs, for good, with incredible potential impact for both society and our environment, IF we engage responsibly and ethically.

It’s important to acknowledge that this isn’t a simple black-and-white decision. Purpose-led organisations should carefully weigh these ethical conundrums and the use case, document their decision-making and, most importantly, put the right foundations in place from the start.

We’ve created an Ethical AI Framework, including a set of AI principles we’re currently testing. We’re starting small, experimenting responsibly to mitigate or eliminate risks wherever possible, and documenting and communicating our experiments transparently with you.

Over the coming weeks we’ll share more about our journey into the ethics of AI, inspiring examples of AI for Good, and where we believe you can leverage LLMs responsibly for the most impact.

In the meantime, we’re here to help you understand the potential disruption to your organisation, or just to have an informal chat about the challenges you face.

Get in touch about your project

However early-stage your thinking is, we’d love to have a chat. Drop us an email or book something on Calendly.