Pavel Svitek

Adding message history with Vercel's AI SDK

Until now, we have sent a single prompt to the LLM. In most cases, however, we want to send the whole conversation history - when the model has the full context of the conversation, it can respond with better answers.

What is a message history? A message history is a collection of all the messages between the user and the AI.

It will become clearer after looking at the ModelMessage[] type from AI SDK 6.

ModelMessage type

In AI SDK 6, use the ModelMessage type from the ai package (the older CoreMessage type was removed). For simple text messages, each item looks like this:

A text ModelMessage is a simple object with role and content properties. The role property is either user, assistant or system. The content property is a string that contains the message text.

  1. Role system is reserved for the system prompt. It takes over the role of the standalone system prompt that we used in the previous lesson.
  2. Role user is reserved for the user's messages. It represents messages written by the user.
  3. Role assistant is reserved for the AI's messages. It represents messages generated by the LLM.

Now that we understand the ModelMessage type, let's look at how we can use it in our code.

1

Let's replace the system prompt from the previous lesson. Here's the original code:

2

Let's move the system prompt into a ModelMessage[] array:

3

But what about the user prompt? Let's add it as well. Note that once the prompt lives in the messages array, the standalone prompt option is no longer needed.

4

Let's put everything together, including the model definition and the function call.

import { generateText, type ModelMessage } from "ai";
import { openai } from "@ai-sdk/openai"; // example provider

const model = openai("gpt-4o"); // pick any AI SDK provider model

const systemPrompt = `Respond as a pirate.`;

export async function generateAnswer(prompt: string) {
  const messages: ModelMessage[] = [
    { role: "system", content: systemPrompt },
    { role: "user", content: prompt },
  ];

  const { text } = await generateText({
    model,
    messages,
  });

  return text;
}

In this lesson, we've learned how to use the ModelMessage[] type from Vercel's AI SDK to send the whole conversation history to the LLM. This is a very useful feature - it gives the model the context of the entire conversation, allowing it to generate more accurate responses.