What we're working on

Director of Innovation

Tools aren’t neutral. They shape ways of working and social relationships. We’re being deliberately cautious in how we experiment with these new Large Language Models, and part of that is being transparent about what we are and aren’t using them for in these early stages. Here’s a three-minute read on what we’re up to.

Ethics

Amongst all our other experimentation, this has been our key focus. We have a small, three-person team looking at it. There are lots of solid arguments for why LLMs should be used by a purpose-led organisation, and we’re stepping through them to understand whether we can responsibly use LLMs ourselves and responsibly recommend them to our customers. The fundamental position is that if the ethics aren’t right, we’ll stop exploring. Our aim is to have an ethics framework, and a clear position, ready next week.

Collaborating with LLMs

Whilst exploring the ethics, we have started to understand how we can best collaborate with LLMs. If the hypothesis that LLMs improve the quality of work and the speed at which it can be delivered holds, our rationale is that it would be unethical not to use that benefit to increase the impact our customers can make. We’re aware there’s some awkward circular reasoning here.

That said, we’re working on toy projects. This website is one of them, and a couple of Next.js apps have been built too. Up to this point we’ve been trying to improve the quality of the model’s responses by understanding how specific prompts need to be. From next week we’ll be forcing a weekly cadence on ourselves to create new projects publicly; the first will be an app about how to improve prompt quality.
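To make the point about specificity concrete, here’s a minimal sketch of the kind of comparison we’ve been running. It assumes the openai Node package (v4-style API), and the prompts and the service they describe are hypothetical:

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// A vague prompt tends to produce a generic answer.
const vague = "Write about our new service.";

// Adding audience, format, tone and length constraints narrows the
// model's search space and usually improves the response.
const specific =
  "Write a 100-word announcement of our new accessibility audit service " +
  "for charity comms teams. Plain English, warm tone, end with a call to action.";

async function complete(prompt: string): Promise<string> {
  const res = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content ?? "";
}

// Compare the two outputs side by side.
const [a, b] = await Promise.all([complete(vague), complete(specific)]);
console.log("Vague:\n", a, "\n\nSpecific:\n", b);
```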

Integrating with LLMs

Moving a step closer to LLMs, we’ve also started working on projects that integrate directly with them. Specifically, Conversations with your Content and Wagtail AI both integrate with GPT-3.5’s API. To create value quickly and understand the possible risks, we’ve started working with one of our clients on an internal project, exploring the space and getting feedback from their team. It has been interesting iterating with vectors and text embeddings to work out how best to pass context through to GPT-3.5.
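As a rough sketch of that pattern, assuming the openai Node package (v4-style API) and a naive in-memory store rather than a real vector database; the document snippets are invented for illustration:

```ts
import OpenAI from "openai";

const openai = new OpenAI();

// Hypothetical content snippets standing in for a client's documents.
const docs = [
  "Our refund policy allows returns within 30 days of purchase.",
  "Support is available Monday to Friday, 9am to 5pm UK time.",
];

// Embed a piece of text into a vector.
async function embed(text: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: text,
  });
  return res.data[0].embedding;
}

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answer(question: string): Promise<string> {
  // Embed the documents and the question, then pick the closest document.
  const docVectors = await Promise.all(docs.map(embed));
  const qVector = await embed(question);
  const scores = docVectors.map((v) => cosine(v, qVector));
  const context = docs[scores.indexOf(Math.max(...scores))];

  // Pass the retrieved context to GPT-3.5 alongside the question.
  const res = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: `Answer using this context: ${context}` },
      { role: "user", content: question },
    ],
  });
  return res.choices[0].message.content ?? "";
}

console.log(await answer("When can I get a refund?"));
```

In production the embedding step happens once, up front, with the vectors held in a dedicated store; only the question is embedded at query time.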

Understanding how human-machine interaction is changing

As a quick primer: HMI has been reasonably steady as a discipline since Steve Jobs popularised the Graphical User Interface with the Lisa and Macintosh. We’ve had arguments over skeuomorphism and slowly moved towards more abstraction, but most digital interactions have followed their physical peers. We press buttons, toggle switches, open folders and, mostly, have hierarchical data structures. There have been challenges to this over the years: smart speakers opened up a Voice User Interface layer, whilst Golden Krishna challenged the need for visual user interfaces at all.

Large Language Models are the latest challenge to how we interact with machines. Even Bill Gates has been chatting about this. Concretely, we’ve been considering what user interfaces look like if the “interface” is simply a conversation. There’s a lot to unpack here, because it moves machine interaction far closer to human conversation, where turn-taking and geeky things like turn construction units become necessary elements to think about when creating digital experiences.

Upskilling internally

Internally we’re using an emergent system in which early adopters model behaviours and share learnings. We’re deliberately moving slowly, for two reasons. First, it ensures no one is excluded through lack of knowledge, and lets us distribute learnings cross-functionally and across disciplines. Second, it allows what early adopters discover to be fed back into our ethics work, making it more resilient.

Get in touch about your project

It doesn't matter how early-stage your thinking is, we'd love to have a chat. Drop us an email or book something on Calendly.