Pavel Svitek
Source code

Generate text with Vercel's AI SDK

Re-running the program every time we want an answer from the LLM isn't much fun, nor very realistic. Let's create a question-answer loop that allows us to hold a conversation with the model.

Step 1

We will create a new function startInteractiveChat that is responsible for the loop. We will use the readline module from the Node.js standard library.

Within the function, we define a nested function askQuestion that calls itself recursively after each answer. The user can exit the loop by typing "exit". The full implementation is shown at the end of this section.

Step 2

But wait, the LLM doesn't remember anything about us, because we don't send it the conversation history! Let's fix that: we will add a second parameter to generateAnswer, history: ModelMessage[].
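generateAnswer was introduced earlier in the tutorial; here is a minimal sketch of how it could accept the history, assuming AI SDK v5 (where the ModelMessage type is exported from the ai package). The model name gpt-4o-mini is a placeholder; use whatever provider and model you configured earlier.

```ts
import { generateText, type ModelMessage } from 'ai'
import { openai } from '@ai-sdk/openai'

// Sketch: send the running conversation to the model on every call.
async function generateAnswer(prompt: string, history: ModelMessage[]): Promise<string> {
  const { text } = await generateText({
    model: openai('gpt-4o-mini'), // placeholder model
    messages: [...history, { role: 'user', content: prompt }],
  })

  // Remember both sides of the exchange for the next turn
  history.push({ role: 'user', content: prompt })
  history.push({ role: 'assistant', content: text })

  return text
}
```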

Step 3

Let's look back at startInteractiveChat and see how its implementation needs to change: we create one history array for the whole session and pass it into every generateAnswer call, as the updated code below shows.

```ts
import * as readline from 'node:readline'
import type { ModelMessage } from 'ai'

async function startInteractiveChat() {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  })

  // One history array for the whole session, shared by every turn
  const history: ModelMessage[] = []

  console.log("Interactive chat started. Type 'exit' to quit.")

  const askQuestion = () => {
    rl.question('You: ', async (input) => {
      if (input.toLowerCase() === 'exit') {
        rl.close()
        return
      }

      try {
        const response = await generateAnswer(input, history)
        console.log('AI: ' + response + '\n')
        askQuestion()
      } catch (error) {
        console.error('Error:', String(error))
        askQuestion() // keep the loop alive even after a failed request
      }
    })
  }

  askQuestion()

  rl.on('close', () => process.exit(0))
}
```
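Because rl.question takes a callback, the loop is expressed as recursion: askQuestion schedules itself again after each answer. If you prefer a plain loop, node:readline/promises offers an awaitable question method instead. To try the chat out, the function only needs to be called once at the bottom of the file (a sketch; your entry point may differ):

```ts
// Entry point: start the loop and surface any unexpected failure
startInteractiveChat().catch((error) => {
  console.error('Fatal error:', error)
  process.exit(1)
})
```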

The implementation has grown a bit longer, so it's best to have a look at the complete example on GitHub.