Creating the API Route

In Next.js, you can create custom request handlers for a given route using Route Handlers. Route Handlers are defined in a route.ts file and can export functions named after HTTP methods, such as GET, POST, PUT, and PATCH.
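For example, a minimal Route Handler might look like this. The app/api/hello path and the response bodies are purely illustrative:

app/api/hello/route.ts
export async function GET() {
  // Respond to GET requests with a static JSON payload
  return Response.json({ message: "Hello from a Route Handler" });
}

export async function POST(req: Request) {
  // Echo back whatever JSON body the client sent
  const body = await req.json();
  return Response.json({ received: body });
}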
1. Create the File

Create a file at app/api/chat/route.ts.
2. Add the Code

Open the file and add the following code:
app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText, UIMessage } from "ai";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}

How the API route works

1. Declare the POST Function

Export an asynchronous function called POST that handles incoming HTTP POST requests to this route.
2. Extract Request Data

Retrieve the messages from the request body using await req.json() to get the conversation history.
3. Call AI SDK

Pass the messages to the streamText function imported from the AI SDK, along with the specified model configuration.
4. Return Response

Return the model’s streaming response via toUIMessageStreamResponse(), which converts the result into a UI message stream that the client can render in real time (see the client-side sketch below).
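To make the request/response cycle concrete, here is a hypothetical client-side sketch that calls the route directly with fetch and logs the raw stream chunks as they arrive. In practice, the useChat hook from @ai-sdk/react does all of this for you, including parsing the UI message stream protocol:

async function sendTestMessage() {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // A UIMessage has an id, a role, and an array of typed parts
    body: JSON.stringify({
      messages: [
        { id: "1", role: "user", parts: [{ type: "text", text: "Hello!" }] },
      ],
    }),
  });

  // Read the streamed response body chunk by chunk
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value)); // raw UI message stream chunks
  }
}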

Test the Basic Implementation

1. Start the Development Server

Run pnpm run dev and navigate to http://localhost:3000.
2. Send a Test Message

Head back to the browser and try sending a message again. You should see a response from the model streamed directly in!

While you now have a working agent, it isn’t doing anything special yet.

Adding System Prompts

Let’s add system instructions to refine and restrict the model’s behavior. In this case, you want the model to only use information it has retrieved to generate responses. Update your route handler with the following code:
app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText, UIMessage } from "ai";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: `You are a helpful assistant. Check your knowledge base before answering any questions.
    Only respond to questions using information from tool calls.
    If no relevant information is found in the tool calls, respond, "Sorry, I don't know."`,
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
In its current form, your agent is now, well, useless. The model will respond with “Sorry, I don’t know” to any question because it doesn’t have access to any knowledge base or tools yet.

Test the System Prompt

1. Ask a Question

Head back to the browser and ask the model what your favorite food is. The model should now respond exactly as instructed above (“Sorry, I don’t know”), since it doesn’t have any relevant information.
2. Test Different Queries

Try various questions to verify the system prompt is working correctly.
The system prompt ensures the AI only responds using information from tool calls, which we haven’t implemented yet. This prepares the foundation for RAG functionality.
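To preview where this foundation leads, a knowledge-base tool could eventually be wired into the handler along the lines below. This is only a sketch, not the guide’s final code: getInformation and findRelevantContent are hypothetical names, and it assumes the AI SDK’s tool helper with a zod input schema:

import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, stepCountIs, streamText, tool, UIMessage } from "ai";
import { z } from "zod";

// Placeholder retrieval function, to be replaced by real knowledge-base lookups
async function findRelevantContent(question: string): Promise<string[]> {
  return [];
}

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: `...`, // same system prompt as above
    messages: convertToModelMessages(messages),
    stopWhen: stepCountIs(5), // allow tool-call steps before the final answer
    tools: {
      getInformation: tool({
        description: "Search the knowledge base for information relevant to the question.",
        inputSchema: z.object({ question: z.string() }),
        execute: async ({ question }) => findRelevantContent(question),
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}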

Understanding the Implementation

Request Handling: parses the messages array from the request body.

Streaming Response: streams the model’s output to the client in real time.

Model Configuration: uses OpenAI’s gpt-4o model for chat responses.

System Prompt: restricts the AI to answer only from tool-call information.
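One practical consequence of the Model Configuration point: because the model is an explicit argument to streamText, trying a different model is a one-line change. For example (gpt-4o-mini is just an illustrative alternative):

// Inside the same POST handler, only the model id changes
const result = streamText({
  model: openai("gpt-4o-mini"),
  messages: convertToModelMessages(messages),
});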

Extension tasks