Generate text with Vercel's AI SDK
Re-running the program every time we want an answer from the LLM isn't much fun, and it isn't very realistic either. Let's create a question-answer loop so we can have an actual conversation with the model.
We will create a new function, startInteractiveChat, that is responsible for the loop. For reading user input we will use the readline module, which is part of the Node.js standard library.
Within that function, we will define a nested function, askQuestion, which calls itself recursively after each answer. The user can exit the loop by typing "exit".
But wait: the LLM doesn't remember anything about us, because we never send it the conversation history! Let's fix that by adding a second parameter to generateAnswer: history: ModelMessage[].
With that in mind, here is how the implementation of startInteractiveChat changes:
```typescript
import * as readline from 'node:readline'
import type { ModelMessage } from 'ai'

async function startInteractiveChat() {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  })

  // Conversation history, passed to generateAnswer on every turn.
  const history: ModelMessage[] = []

  console.log("Interactive chat started. Type 'exit' to quit.")

  const askQuestion = () => {
    rl.question('You: ', async (input) => {
      if (input.toLowerCase() === 'exit') {
        rl.close()
        return
      }

      try {
        const response = await generateAnswer(input, history)
        console.log('AI: ' + response + '\n')
        askQuestion()
      } catch (error) {
        console.error('Error:', String(error))
        askQuestion()
      }
    })
  }

  askQuestion()

  rl.on('close', () => process.exit(0))
}
```
The implementation is getting a bit long now, so it's best to look at the complete example on GitHub.