LLM cheatsheet

A three-minute cheatsheet about the world of Large Language Models. We built this for ourselves here at Torchbox: despite being a group of people deep in the weeds of technology, we were still finding it hard to keep up. It errs towards ease of understanding over perfect accuracy.

Illustration of the Konami code

Why does any of this matter?

In March 2023 OpenAI achieved a Wright brothers moment with the release of GPT-4. Its benchmarks showed ‘human-level performance’ that no one would have predicted even a few months earlier. The pace of progress is faster than any other digital innovation we’ve previously seen, and it’s being driven by Large Language Models.

What are Large Language Models (LLMs)?

LLMs grew out of machine learning research; Google was first to the party in 2018 with BERT. They are ‘stochastic parrots’, in the now-famous phrase of the linguist Emily Bender. They create value by predicting what the next word will be, given the series of words that precede it. They’re able to do this because of the sheer quantity of data they’ve been trained on.
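The ‘predict the next word’ idea can be illustrated with a toy bigram model: count which word tends to follow which, then pick the most frequent. This is a drastic simplification we’ve written for illustration only; real LLMs use transformer neural networks with billions of parameters, not simple counts.

```python
from collections import Counter, defaultdict

# A toy corpus - real LLMs are trained on hundreds of billions of words.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word`, or None."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

An LLM does conceptually the same thing, except its ‘counts’ are replaced by a learned model that takes the whole preceding context into account.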

What can LLMs do?

We don’t know the full capabilities of LLMs. Currently they’re mostly being used to generate or analyse content. The first people to see the potential of LLMs were marketeers, who saw the benefit of generating lots of content, but the disruption won’t stop there. A paper published by OpenAI argued that 80% of the workforce will be affected by LLMs in some way, whilst 15% of all work tasks could potentially be completed to a higher standard, and more quickly, with LLMs.

Why are LLMs sometimes incorrect?

There are a few reasons for this. The more data they’re trained on, the more chance they have of mimicking human falsehoods; we humans are unfortunately frequently incorrect. LLMs are designed to mimic the type of response a human might make, which is why they’re liable to make up titles for academic papers or articles that are entirely fictional but sound plausible. Additionally, LLMs are limited to the data they’ve been trained on: if that data is outdated or missing, the wrong answer will be returned.

What’s a GPT and why are there different versions?

GPT stands for Generative Pre-trained Transformer. GPT-4 is the most powerful and is only accessible with a subscription or in a very limited way via Bing. GPT-3.5 - the model that powers ChatGPT - is the most popular and was released in November 2022. GPT-3 is now far less used because it’s expensive and gives much lower-quality results. Earlier versions of GPT aren’t easily accessible.

A Bard riding a LLaMA with an Alpaca?

There are a bunch of other models. LLaMA is a model Meta released to researchers, whose weights quickly leaked into the wild via 4chan, whilst Alpaca is a Stanford research project that builds on LLaMA. To add to the diversity of LLM-based products, Google has Bard, which so far hasn’t seen much uptake.

Aren’t there a bunch of things that can create images?

There are. Midjourney, DALL-E and Stable Diffusion are the best known. They generated big hype in the summer of 2022, but they haven’t quite had the breakthrough moment that GPT-4 has. It wouldn’t surprise us if that changes in the near future, though.


Get in touch about your project

It doesn't matter how early-stage your thinking is, we'd love to have a chat. Drop us an email or book something on Calendly.