Adding message history with Vercel's AI SDK
Until now, we have sent a single prompt to the LLM. In most cases, though, we want to send the whole conversation history - when the model has the full context of the conversation, it can respond with a better answer.
What is a message history? A message history is a collection of all the messages between the user and the AI.
This will become clearer once we look at the ModelMessage[] type from AI SDK 6.
ModelMessage type
In AI SDK 6, use the ModelMessage type from the ai package (the older CoreMessage type was removed). For simple
text messages, each item looks like this:
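Here's a sketch of such an array. Note this uses a simplified, text-only version of the type for illustration - the real ModelMessage type from the ai package also supports structured content parts and a tool role:

```typescript
// Simplified, text-only sketch of the shape; the real ModelMessage
// type from the "ai" package also supports richer content parts.
type ModelMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

const messages: ModelMessage[] = [
  { role: "system", content: "Respond as a pirate." },
  { role: "user", content: "What is TypeScript?" },
  { role: "assistant", content: "Arr, TypeScript be JavaScript with types!" },
];
```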
As you can see here, a text ModelMessage is a simple object with role and content properties. The role property
is either user, assistant or system. The content property is a string that contains the message content.
- Role system is reserved for the system prompt. We can replace it with our own system prompt, which we used in the previous lesson.
- Role user is reserved for the user's message. It represents messages written by the user.
- Role assistant is reserved for the AI's message. It represents messages written by the LLM model.
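To see how these roles fit together, here's a sketch of a history that grows with each conversation turn (again using a simplified, text-only message shape for illustration):

```typescript
// Simplified text-only shape; the real ModelMessage type from the
// "ai" package supports richer content as well.
type ModelMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

// The history starts with the system prompt...
const history: ModelMessage[] = [
  { role: "system", content: "Respond as a pirate." },
];

// ...and each turn appends the user's message and the model's reply.
function addTurn(userText: string, assistantText: string) {
  history.push({ role: "user", content: userText });
  history.push({ role: "assistant", content: assistantText });
}

addTurn("Hello!", "Ahoy, matey!");
addTurn("What's 2 + 2?", "Arr, that be 4!");
```

Sending the whole history array on every request is what gives the model its memory of the conversation.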
Now that we understand the ModelMessage type, let's look at how we can use it in our code.
Let's replace the system prompt from the previous lesson. Here's the original code:

```typescript
import { generateText } from "ai";

const systemPrompt = `Respond as a pirate.`;

export async function generateAnswer(prompt: string) {
  const { text } = await generateText({
    model,
    prompt,
    system: systemPrompt,
  });

  return text;
}
```

Let's replace the system message and prompt with a ModelMessage[] array. But what about the user prompt? Let's add it as well. Note that we removed the standalone system and prompt parameters - both now live inside the messages array.

Let's put everything together, including the model definition and the function call.
In this lesson, we've learned how to use Vercel's AI SDK ModelMessage[] type to send the whole conversation history to the LLM. This is a very useful feature - with the context of the entire conversation, the model can generate more accurate responses.