Add a chatbot to your app — the right way, without bankrupting yourself on day one.
Don't worry — you won't write any AI chat code by hand. Your AI tool handles all the technical parts. This guide helps you understand the concepts so you can describe what you want.
Every AI chat — ChatGPT, Claude, your custom support bot — has the same four pieces. Once you understand them, building one is just gluing them together.
The UI
A list of messages on the page and an input box at the bottom. Just HTML and CSS — the easy part.
The AI provider
OpenAI, Anthropic, or another LLM vendor. You send messages, they send back answers.
The API route
Your own backend endpoint that takes the user's message and forwards it to the AI provider, then streams the response back.
Conversation history
Past messages saved to your database so users can scroll back and continue conversations later.
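You don't need to write this yourself, but if you're curious how the four pieces glue together, here is the whole round trip as one conceptual function. Every name in it is illustrative; the real version your AI tool generates will be asynchronous and streaming.

```typescript
// One conceptual function gluing the four pieces together.
// All names here are illustrative, not from any specific SDK.
type Role = "user" | "assistant";
interface Message { role: Role; content: string; }

// 1) the UI collects input, 2) your API route calls the provider,
// 3) the reply comes back, 4) both turns become conversation history.
function handleChatTurn(
  history: Message[],
  userInput: string,
  callProvider: (msgs: Message[]) => string, // stands in for OpenAI/Anthropic
  save: (msg: Message) => void,              // stands in for your database
): Message[] {
  const userMsg: Message = { role: "user", content: userInput };
  const reply: Message = {
    role: "assistant",
    content: callProvider([...history, userMsg]), // provider sees full history
  };
  save(userMsg); // persist both sides of the exchange
  save(reply);
  return [...history, userMsg, reply];
}
```

That's it: everything else in this guide is making one of those four steps nicer, safer, or cheaper.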
Why it's the default choice
The easiest way to build a chat in Next.js is the Vercel AI SDK. It handles streaming, message state, and switching between providers — all the annoying parts.
ADD AI CHAT WITH VERCEL AI SDK
"Add an AI chatbot to my app at /chat, powered by Anthropic's Claude. The chat should: (1) show the reply word-by-word as it's written, (2) show a typing indicator while waiting, (3) automatically scroll to the newest message, (4) show a friendly message if something goes wrong. Keep my Claude API key in a safe place so it's never visible to visitors."
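If you're wondering what the generated route actually sends to Anthropic, here is a rough sketch of the request body. The Vercel AI SDK assembles this for you; the field names follow Anthropic's public Messages API, but treat the exact values (like the model id) as illustrative.

```typescript
// Rough shape of the server-side request your /api/chat route makes.
// The SDK builds this for you -- shown only so the prompt above is
// less of a black box.
interface ProviderMessage { role: "user" | "assistant"; content: string; }

function buildAnthropicPayload(messages: ProviderMessage[], systemPrompt: string) {
  return {
    model: "claude-sonnet-4-5", // illustrative model id
    max_tokens: 500,            // response-length cap (see the cost section)
    system: systemPrompt,       // hidden instructions (next section)
    messages,                   // the full conversation so far
    stream: true,               // send the reply back word-by-word
  };
}
```

The API key travels in a request header on the server, never in this body and never in the browser.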
Out of the box, the AI is a generalist. To make it useful for your specific app, you give it a system prompt — a hidden instruction that shapes every response.
Example: customer support bot
You are a helpful customer support agent for Acme Cloud, a SaaS product for project management.

Rules:
- Only answer questions about Acme Cloud
- If asked about anything else, politely redirect
- Be friendly but concise (2-3 sentences max)
- Never make up features that don't exist
- If you don't know, say "Let me get a human" and offer to email support@acme.com
The user never sees this. They just notice the bot "knows" about your product and stays on-topic. That's the whole secret.
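A system prompt is just a string you assemble once and attach to every request. A tiny helper like the hypothetical one below makes the structure obvious; the important part is that this string travels in the hidden `system` field, never in the visible message list.

```typescript
// Hypothetical helper for assembling a support-bot system prompt like
// the Acme Cloud example above. All names are illustrative.
function buildSupportPrompt(product: string, rules: string[], fallbackEmail: string): string {
  return [
    `You are a helpful customer support agent for ${product}.`,
    "Rules:",
    ...rules.map((r) => `- ${r}`),
    `- If you don't know, say "Let me get a human" and offer to email ${fallbackEmail}`,
  ].join("\n");
}
```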
By default, the chat resets when the user reloads the page. That's usually not what you want. Save conversations to your database so users can come back later.
Schema: two simple tables
conversations — id, user_id, title, created_at
messages — id, conversation_id, role (user/assistant), content, created_at
ADD CHAT HISTORY
"Add conversation history to my AI chat. Create conversations and messages tables in Supabase. When a user starts a new chat, INSERT a conversation row. After every message exchange, INSERT both the user message and the AI response into the messages table. Add a sidebar showing the user's past conversations (with auto-generated titles based on the first message). Clicking one loads its messages into the chat."
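The "auto-generated titles" part of that prompt can be as simple as trimming the first user message. This sketch hard-codes a 40-character cutoff, which is an arbitrary choice, not a Supabase or SDK requirement (a fancier version asks the AI itself for a title).

```typescript
// Sketch: derive a sidebar title from the conversation's first message.
// Collapses whitespace, then truncates with an ellipsis past 40 chars.
function titleFromFirstMessage(firstMessage: string, maxLen = 40): string {
  const clean = firstMessage.replace(/\s+/g, " ").trim();
  if (clean.length <= maxLen) return clean;
  return clean.slice(0, maxLen - 1).trimEnd() + "…";
}
```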
AI APIs are pay-per-use. One unhinged user (or a bot) can rack up $100/day. Three controls every chat needs:
Cap response length
Set max_tokens on every API call. 500 tokens = ~375 words. Plenty for chat. Anything longer is probably a runaway loop.
Rate-limit per user
Use Upstash Redis or your database. Track requests per IP or user_id. Block after N messages per hour.
Set a hard monthly cap
OpenAI and Anthropic both let you set a billing limit in their dashboard. Set it to whatever you can afford to lose.
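The rate-limiting logic itself is small. Here is an in-memory sketch of a sliding window; in production you'd keep the counters in Upstash Redis or your database (as suggested above) so they survive restarts and are shared across serverless instances, but the windowing idea is the same.

```typescript
// Minimal in-memory sliding-window rate limiter (conceptual sketch).
// key = user_id or IP; limit = max requests per window.
class RateLimiter {
  private hits = new Map<string, number[]>(); // key -> request timestamps (ms)

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    // Keep only timestamps still inside the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over the limit: reject BEFORE calling the AI provider
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

The crucial detail is where the check happens: in your API route, before the provider call, so a blocked request costs you nothing.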
ADD RATE LIMITS TO MY CHAT
"Add a limit to my AI chat so each visitor can only send 20 messages per hour. Identify them by their account if they're signed in, or by their device otherwise. When someone hits the limit, show a friendly red message in the chat that says: 'You've hit your hourly message limit. Try again in X minutes.'"
API key showing up in the browser
You called the AI provider directly from a client component. Always go through your own /api/chat route on the server, never expose the key.
Chat works locally but breaks in production
You forgot to add ANTHROPIC_API_KEY (or OPENAI_API_KEY) to your Vercel environment variables. Fix it under Settings → Environment Variables, then redeploy.
Streaming feels choppy or stalls
Make sure your API route returns a proper streaming response. The Vercel AI SDK handles this — don't buffer the response before sending it.
AI "forgets" previous messages mid-conversation
You're only sending the latest message. Send the entire message history with each request so the AI has context.
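That last fix in one sketch: the request body must carry every prior turn, oldest first, not just the newest message. The names here are illustrative.

```typescript
// The "forgetting" bug and its fix, side by side.
interface Turn { role: "user" | "assistant"; content: string; }

function requestBody(history: Turn[], newMessage: string) {
  // Wrong:  { messages: [{ role: "user", content: newMessage }] }
  // Right:  the whole conversation, then the new message on the end.
  return { messages: [...history, { role: "user", content: newMessage }] };
}
```

LLM APIs are stateless: the model only "remembers" what you resend each time.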
Your SaaS is growing and the same support questions come in over and over. Build an in-app AI chatbot that's been told everything about your product (via the system prompt). Users can ask questions in plain English and get instant answers, freeing your support team for the hard cases.
Build this with AI
"Build an AI support chatbot for my SaaS at /support. Use the Vercel AI SDK with Claude. The system prompt should tell the AI it's a support agent for [your product name], list your main features, link to your docs, and instruct it to escalate to support@example.com for billing or account issues. Stream responses, save conversations to Supabase, rate-limit to 30 messages/hour per user, and show a typing indicator while waiting."
ADD CONTEXT FROM YOUR DATABASE
"In my AI chat, before sending the user's message to the AI, fetch relevant context from my database (e.g., the user's recent orders, their saved notes, or product info matching their query). Pass that context as part of the system prompt so the AI can give personalized answers."
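Under the hood, "passing context" just means folding the fetched rows into the system prompt string before the request goes out. The formatting below is a free choice, not a requirement; the only rule is that the context rides in the hidden system field, not in the visible chat.

```typescript
// Sketch of context injection: append fetched database rows to the
// base system prompt. Row formatting is illustrative.
function withContext(basePrompt: string, contextRows: string[]): string {
  if (contextRows.length === 0) return basePrompt; // nothing relevant found
  return (
    basePrompt +
    "\n\nContext about this user:\n" +
    contextRows.map((r) => `- ${r}`).join("\n")
  );
}
```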
STREAM A SUMMARY
"Add a 'Summarize' button on my long-form content pages (blog posts, transcripts). When clicked, it should stream an AI-generated 3-bullet summary into a sidebar without leaving the page. Use the Vercel AI SDK's streamText function."
VOICE-TO-TEXT INPUT
"Add a microphone button to my chat input that records the user's voice and transcribes it (using Whisper or the Web Speech API). When they stop recording, the transcribed text appears in the input box and they can edit it before sending."