Building a responsible culture of AI Innovation

Senior Innovation Designer

In this post, Andy speaks with Ismael Kherroubi García about how nonprofits can build and nurture a responsible culture of AI innovation.

Ismael is the founder and CEO of Kairoi, the AI Ethics & Research Governance Consultancy. He advises on accurate communications strategies, technical standards, public engagement, and robust governance practices.

Thank you very much for taking the time to answer some of our questions, Ismael! Can you tell us a bit about yourself and your background please?


“Thanks for having me! I’m the founder and CEO of Kairoi, the AI Ethics & Research Governance consultancy. My career has been something of a roller coaster – I’ve worked in fintech at Bloomberg, as well as HR in the arts and academia.

In 2020/21, I managed and overhauled the Alan Turing Institute’s research ethics committee, which inspired a report I recently launched with the Ada Lovelace Institute.

Since early 2022, I’ve joined Genomics England’s Ethics Advisory Committee, worked on data ethics workshops with the University of Bristol, completed a master’s in Philosophy of the Social Sciences at the London School of Economics, and worked on an online course on open science at NASA. This was an exciting project where I got to explore how to convey the cultural change needed for responsible research practices.

Since establishing Kairoi last August, I’ve worked with organisations such as We and AI, NHS Health Research Authority, and Mozilla.”

That is a really interesting journey. Your experience would be hugely valuable for nonprofits and charities. Would you tell us a little bit more about what a responsible culture of AI Innovation means to you?


“For me, a responsible culture of AI innovation involves acknowledging AI as a powerful range of novel technologies that, nevertheless, come with significant limitations. In a nutshell, a responsible AI innovation culture is about understanding what AI tools are and can effectively do.

Such a culture can be embraced by any organisation. I like to speak of innovators, buyers and end users. Many nonprofits – such as universities – are innovators, developing cutting-edge technologies and research methods. Meanwhile, most charities will have some element of AI in their IT systems – they are buyers. Finally, end users are individuals – whether we’re using AI tools to find the quickest route somewhere, accessing medical services, seeking employment, or something else.

Whilst each of these stakeholders will have a different amount of influence on advancements in AI, establishing a responsible AI innovation culture involves developing nuanced notions of AI and including diverse stakeholders in the process. After all, AI tools are made by and for people!”

You mentioned diverse stakeholders’ roles being crucial for building a responsible culture of AI innovation, and we completely agree. What does that actually look like in practice?


“Innovating in AI often means understanding the target area where a tool is applied, and that requires engaging with experts across disciplines, as well as with the tool’s diverse users. In short, responsible AI innovation cultures require inclusion. In practice, this requires stakeholder mapping: understanding who is impacted by AI tools and what responsibilities their creators have.

At Kairoi, we advise beginning with an internal stakeholder mapping exercise, so that we can see what different inputs can be made to an organisation’s innovation culture.”

Can you explain a bit more about how mapping internal stakeholders might help encourage responsible innovation?


“Recall the idea of end users and consider that all staff are ultimately end users. The recent rise of generative AI has meant that working with AI tools is no longer something only engineers and data scientists do. With this, colleagues beyond Technology or R&D divisions are becoming aware of the capabilities and limitations of such tools.

Whether in HR, marketing or finance, innovating in AI can be facilitated through cross-functional conversations. What’s more, the existence of relevant legislation (e.g. UK GDPR) and the arrival of more targeted legislation (e.g. the EU’s AI Act) mean that coworkers from governance or legal departments may have particular legal perspectives that we may otherwise miss if we work in silos.

By engaging with “non-AI experts”, R&D departments can break out of echo chambers and gain new insights about common experiences with AI. By breaking down barriers between departments at tech companies, we foster more responsible tech environments. After all, the company shows that it values different perspectives, and those who may not usually have a say in the tech their company builds can feel closer to their shared mission and vision.”

You mentioned non-experts. Are there more technical experts that should be involved?


“Unless we are conducting theoretical research into computational capabilities, it is very likely that an AI tool is being developed for a particular context, such as biology, economics, logistics and so on. It is key that experts in the areas of application are also involved in the design of such AI tools.”

There has been a lot of discussion on fundraising and the role of AI-powered chatbots. What might a responsible, inclusive approach to innovation look like here?


“Let’s imagine that the solution proposed is an AI-powered chatbot appearing on the website of a fundraising campaign. Much like a customer service chatbot we find on so many websites, this one answers basic questions about the campaign and navigating the website, and even allows you to sign up for further email updates if you want.

In the process of developing such a tool, it is key that experts in fundraising, as well as experts in user experience (UX), be involved. Ultimately, we want the tool to help generate leads and keep people interested in engaging with the campaign. Fundraisers can help ensure that the tone of the chatbot’s outputs is appropriate, and that it prompts relevant areas of discussion. Meanwhile, UX designers can advise on the chatbot’s interface, as well as on how it integrates with the website more generally, to ensure a seamless user experience.”

AI can be a complex thing to grasp, even for some experts. What should the role of the people we’re designing for be?


“User research is not a new area for innovators, but the significant social consequences of AI innovations require special attention to public engagement.

There are at least two reasons why including the public is important. On the one hand, it is crucial that the general public trust the AI tools we develop, so that the tools are put to best use. Whilst some companies have sought to produce a sense of panic around developments in AI, we must ensure that the public are reassured and well-informed – something that is made possible when they are involved throughout the AI lifecycle.

On the other hand, we must consider the experiences of the end users when developing novel technologies. Asking the public about the value they find in suggested innovations, about potential use cases, and about the challenges the technology itself may entail will all lead to invaluable insights for the success of your latest products and services.”

What are some of the challenges nonprofits might face when involving the public?


“There are at least three challenges: those related to approach, education and diversity.

On approach, a lot of ink has been spilled over “public participation,” and it will be for innovators to figure out whether they want to engage with the public more or less meaningfully. For example, the “ladder of citizen participation” describes a spectrum from manipulating people (so there is a negative angle to this too), to engaging with them as partners and even empowering them with control over decisions.

The challenge around education is about the knowledge that the members of the public being engaged already have. The question is how confident they are in critiquing a given AI innovation. To this effect, the innovator engaging with them may need to prepare materials on what their particular innovations involve, as well as an introduction to AI. It’s important that such a package be accurate – you are engaging with the public to figure out the best use cases and potential risks, not to sell your tech!

Last but certainly not least, diversity relates to the need to include voices from different backgrounds. The AI space generally has a diversity problem, and it is easy to mistakenly involve members of the public who are from similar backgrounds – generally white cis males with university education from Western countries. Involving different voices and encouraging people from marginalised backgrounds to participate will provide a much more nuanced perspective on our innovations. The first step to achieve this is to look at our own staff’s diversity and, yes, inclusion.”

Thank you very much for taking the time to chat to us Ismael! And finally, what does Kairoi mean and where can people find the best resources?


“I always love answering the first question! Kairoi (pronounced /kye-roy/) is the plural of the Ancient Greek word for “opportune time”, kairos. It is about taking advantage of moments when decisions are critical. You can find more about the thinking behind the name and brand on our blog.

On resources, we aim to develop tools openly – we use GitHub for this, and currently have two resources freely available under CC-BY.

Responsible AI Interview Questions are for ensuring that job candidates and our workforces are aware of – or interested in learning about – the potential and limitations of AI technologies.

Our Template ChatGPT Use Policy contains guidelines for staff to get the most out of the tool whilst ensuring safe and responsible usage – our blog explains why such policies are necessary, and I am happy to share that the template has so far been adapted to Spanish, French and Belgian contexts! Bitesize presentations about trends, practices and policy concerning AI can be found there too.

To be the first to know about new resources, follow Kairoi on LinkedIn. Of course, all innovators, buyers and end users are very welcome to get in touch for advice on responsible innovation in AI. You can reach out via email.”


If you’re interested in learning more about some of the practical tools we use at Torchbox, you can read a little more in a previous blog post here.

