4 steps nonprofits, charities & purpose-led organisations can take today to ensure their organisations are ready for AI
Innovation team
Our last article shared 10 reasons why purpose-led organisations shouldn’t engage with LLMs. Deliberately provocative? Possibly, but after the time we’ve invested in exploring the future of AI, its risks today and its unintended consequences, we felt it necessary to openly communicate the costs of some commercial LLMs.
Should that deter us from taking action? Absolutely not.
We’re in the midst of disruption, and speed is a crucial component. But the fear of being left behind shouldn’t drive us to make ill-informed decisions in the short term. We need to ensure we’re laying the right foundations to build from.
Here are 4 steps we’ve taken and believe your organisation will benefit from:
1. Explore tomorrow to inform action today
Torchbox has explored a number of speculative futures and what they might mean for our organisation, the sector and beyond. We’ve done this to ensure we’re resilient to change, can navigate the uncertainty it presents, unlock new impact opportunities and contribute to a future we want to exist.
We arrived at a number of possible future AI scenarios: some with predominantly positive outcomes, some that could easily be mistaken for an episode of “Black Mirror”, and a few somewhere in between. All are highly disruptive, and all are possible.
Seeing both the light and the dark helped us anticipate what might come next, make sure we’re getting ready today, and scan for the consequences of AI systems being deployed at scale.
Our organisational AI strategy is shaped around the needs of our people and customers. Understanding what possible futures there might be allows us to consider what action we should take today.
These big questions1 have been far from prohibitive, and the results are providing us with the springboard to move at speed, responsibly and with minimal risk.
2. Experiment, responsibly
Experimentation is crucial for innovation. It’s central to validated learning and risk reduction, and it’s essential for progress. It’s at the heart of what we do. Naturally, as an Innovation team, we’re huge advocates for the value and importance of experimentation.
That being said, we’re at an important inflection point: a moment of huge technological disruption.
This isn’t just about us playing with a new release of the Microsoft or Google suite, or validating the desirability of a new product. Think of the evolution of the internet: we’re collectively helping shape that with our decisions today. The risks and costs of AI systems are very real.
Experimentation should never be used as a get-out-of-jail-free card. We need to ensure we have the room to learn, but do so safely.
Should that drive inaction? No.
Should that mean we don’t experiment? Definitely not. AI, and advancements in LLMs in particular, are new to many. It means we should experiment more, but responsibly in equal measure. We should be asking critical questions2 about our potential impact and governance.
We love a statement we heard recently on a course at CISL from Dr Niki Wallace of UAL: “Is it good enough for now? Is it safe enough to try?”
3. The do’s and don’ts
Taking the time to properly build your stance on where and when AI systems or LLMs will and won’t be used is crucial. Creating a framework, guiding principles, standards or something as simple as a list of do’s and don’ts will really help guide you through the weeks and months ahead. You’ll likely iterate on these as new information and implications emerge (we certainly have), and it’s essential that they’re created inclusively, drawing on a diverse range of perspectives.
Having a strong foundational stance3 is crucial. It should be informed by your organisation’s purpose and desired future, and be in service of building a future where AI is used for good.
4. Focus on people in your organisation’s AI strategy
AI is disruption. Disruption means change.
Our organisations, for now (and hopefully well into the future), are made up of and run by people.
Most people, however, inherently resist change. We’re wired to seek stability and predictability, after all. When faced with uncertainty, the brain’s amygdala activates, triggering a fight-or-flight response that leads to stress and anxiety. This makes rational, logical decision-making extremely difficult. Pair this with previous negative experiences, the numerous downsides of AI and the potential fear of losing your job to AI, and you’ve likely got a strong cocktail of resistance.
Sounds fun, right? OK, maybe only for the few with a high tolerance.
We need to ensure we’re considering these human factors when creating and deploying an organisational AI strategy. Yes, focus on when and how you want to use AI, but not without also focusing on the people who push your organisation forward: their concerns, trepidations and the emotional journey they’re going through. Here are a few questions4 to get you started.
We need people at the very heart of shaping the future of AI, and that means your people. Talk to them, include them and build a strategy that’s inclusive and overcomes the barriers and blockers that exist.
So, to recap, the 4 steps you might take today are:
- Look ahead, envision your desired future with the intention of taking action today
- Ensure you’re experimenting, but doing so responsibly
- Create a clear stance to help guide your org
- Create a people-centred organisational AI strategy
Get in touch if you’d like to learn more.
Footnotes
1. What futures might exist for your organisation? What future would you prefer to exist? What is in, and out of, your control? Are there people and organisations you can partner with to extend your reach? What action or steps are you taking to help shape that future? Do the decisions you’re taking today reflect that? What might be the blockers to that future existing and how might you overcome them?
2. Who are we experimenting with and what are the intended and unintended consequences? For our planet and all that live on it? What would happen if it was used incorrectly? Or if there was a bad actor using the tools? How are we mitigating or eliminating these risks? What guardrails are in place to ensure your experiments don’t have safety risks? Is it good enough for now? Is it safe enough to try?
3. What does your organisation believe in? What do you and don’t you want to contribute to? Where are the hard lines? What is blurry? When do the benefits outweigh the costs? What use cases are most impactful for your organisation? How will you communicate and govern these?
4. How will AI be explored or adopted in your organisation? What use cases create the most value? Which tools and LLMs will you adopt? Who is responsible, and how will it be governed? How will you start small, learn and then scale what works? Who are the pioneers that will spearhead the change? How are you connecting people and sharing best practice? How will you encourage and incentivise the adoption of AI systems? How will you identify and overcome resistance to change, or guard against bad actors?