# LLM API Interaction
## Overview

Before building an Agent, you need to understand how to communicate with LLM APIs. This chapter uses the Gemini API as an example.
## Basic Setup

### Installation

```bash
npm install @google/generative-ai
```
### Initialization

```typescript
import { GoogleGenerativeAI } from '@google/generative-ai'

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!)
const model = genAI.getGenerativeModel({ model: 'gemini-2.5-flash' })
```
## Conversation Modes

### Single-turn

```typescript
const result = await model.generateContent('Hello!')
console.log(result.response.text())
```
### Multi-turn Chat

```typescript
const chat = model.startChat()

const response1 = await chat.sendMessage('My name is Alice')
console.log(response1.response.text())

const response2 = await chat.sendMessage('What is my name?')
console.log(response2.response.text()) // "Your name is Alice"
```
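Under the hood, `startChat()` accumulates the conversation as an array of `Content` turns, and the SDK accepts a prior history when a chat is started. The sketch below mimics that structure with simplified types; `appendTurn` is an illustrative helper, not part of the SDK:

```typescript
// Simplified mirror of the SDK's history shape: each turn is a Content
// object with a role ('user' | 'model') and an array of parts.
type Part = { text: string }
type Content = { role: 'user' | 'model'; parts: Part[] }

// Illustrative helper (not part of the SDK): append one turn immutably
function appendTurn(history: Content[], role: 'user' | 'model', text: string): Content[] {
  return [...history, { role, parts: [{ text }] }]
}

let history: Content[] = []
history = appendTurn(history, 'user', 'My name is Alice')
history = appendTurn(history, 'model', 'Nice to meet you, Alice!')

// A real chat can resume from earlier turns: model.startChat({ history })
```

This is also how an Agent persists conversations across process restarts: serialize the history, then pass it back to `startChat()` later.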
## Configuring Tools

This is the key step for Agents: telling the LLM what tools are available.
```typescript
const model = genAI.getGenerativeModel({
  model: 'gemini-2.5-flash',
  tools: [{
    functionDeclarations: [
      {
        name: 'read_file',
        description: 'Read file contents',
        parameters: {
          type: 'object',
          properties: {
            path: { type: 'string', description: 'File path' }
          },
          required: ['path']
        }
      }
    ]
  }]
})
```
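The declaration above is only a contract; the model can still return malformed arguments. A cheap defensive check against the schema's `required` list and property types catches that before a tool runs. The validator below is a hand-rolled sketch (not part of the SDK) that only handles flat schemas of primitive types:

```typescript
// Minimal shape of a flat tool-parameter schema (primitives only)
type ParamSchema = {
  type: 'object'
  properties: Record<string, { type: string; description?: string }>
  required?: string[]
}

// Illustrative validator: returns a list of problems, empty if args look fine
function validateArgs(schema: ParamSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = []
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required field: ${key}`)
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key]
    if (!prop) errors.push(`unexpected field: ${key}`)
    else if (typeof value !== prop.type) errors.push(`${key}: expected ${prop.type}, got ${typeof value}`)
  }
  return errors
}

const readFileSchema: ParamSchema = {
  type: 'object',
  properties: { path: { type: 'string', description: 'File path' } },
  required: ['path']
}

console.log(validateArgs(readFileSchema, { path: 'package.json' })) // []
console.log(validateArgs(readFileSchema, {})) // ["missing required field: path"]
```

When validation fails, a robust Agent sends the error messages back to the model instead of crashing, so it can retry with corrected arguments.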
## Handling Tool Calls

```typescript
const response = await chat.sendMessage('Read package.json')

// Check if the LLM wants to call a tool
const functionCalls = response.response.functionCalls()
if (functionCalls?.length) {
  for (const call of functionCalls) {
    console.log(`Tool: ${call.name}`)
    console.log(`Args: ${JSON.stringify(call.args)}`)
  }
}
```
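In an Agent, logging the call is not enough: you execute the tool locally and send the result back so the model can continue. The registry below is an illustrative sketch; the `FunctionCall` type mirrors the name-plus-args shape that `functionCalls()` returns, and `read_file` is stubbed here rather than touching the filesystem:

```typescript
// Mirrors the shape returned by response.functionCalls()
type FunctionCall = { name: string; args: Record<string, unknown> }

// Illustrative tool registry: maps a declared tool name to its implementation
const toolHandlers: Record<string, (args: Record<string, unknown>) => Promise<unknown>> = {
  // Stub: a real read_file would call fs.promises.readFile(args.path, 'utf8')
  read_file: async (args) => ({ content: `<contents of ${args.path}>` })
}

async function executeToolCall(call: FunctionCall): Promise<unknown> {
  const handler = toolHandlers[call.name]
  if (!handler) throw new Error(`Unknown tool: ${call.name}`)
  return handler(call.args)
}

// In the real loop, the result goes back to the model as a functionResponse part:
// await chat.sendMessage([
//   { functionResponse: { name: call.name, response: { result } } }
// ])
```

The execute-and-respond round trip repeats until the model answers with plain text instead of a tool call; that loop is the core of an Agent.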
## Summary

- Use the `@google/generative-ai` SDK
- `startChat()` enables multi-turn conversation
- Declare tools via the `tools` config
- `functionCalls()` gets tool call requests
## Next

Learn about streaming and System Prompts: Prompt & Streaming →