Build a context-driven chatbot with Netlify Blobs and OpenAI

by Domitrius Clark

As AI becomes more integrated into our lives, it’s easy, especially as developers, to get swept up in every new model, product, and feature launch. The goal of this guide is to start your journey into understanding how these tools work by building the foundations of that knowledge with a simple chatbot.

We’ll build this bot using Netlify’s primitives, Astro + React for the UI, and OpenAI’s API to power the responses.

Want to jump straight to the code?

Deploy a complete working example by clicking the button below:

Deploy to Netlify

#Prerequisites

  • Node.js v18+
  • Netlify CLI installed
  • OpenAI account & API key
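
If you don’t have the Netlify CLI yet, one common approach is installing it globally and logging in:

npm install -g netlify-cli
netlify login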

#Create a new Astro project

In the directory of your choice, create a new Astro project using the CLI and select the default options through the CLI wizard:

npm create astro@latest

With the project created, we’ll add the integrations and adapter we need through Astro’s CLI:

npx astro add react netlify tailwind

Here we’ve used the Astro CLI to add the React and Tailwind CSS integrations along with the Netlify adapter. This streamlines editing the config, installing the dependencies, and setting up these integrations.

Learn more about adapters in the Astro docs - https://docs.astro.build/en/reference/adapter-reference/#allow-installation-via-astro-add
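
After these commands, your astro.config.mjs should look roughly like the sketch below. Treat it as illustrative: the exact imports and options depend on the versions the CLI installs (newer Tailwind setups, for example, wire Tailwind in as a Vite plugin rather than the @astrojs/tailwind integration).

astro.config.mjs
import { defineConfig } from "astro/config";
import react from "@astrojs/react";
import tailwind from "@astrojs/tailwind";
import netlify from "@astrojs/netlify";

// React and Tailwind register as integrations; Netlify registers as the adapter
export default defineConfig({
  integrations: [react(), tailwind()],
  adapter: netlify(),
});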

Our last step here is installing the remaining dependencies we’ll need: the Netlify Functions types, the Netlify Blobs client, and the OpenAI SDK:

npm install @netlify/functions @netlify/blobs openai
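
The function we write in a moment reads your key from process.env.OPENAI_API_KEY. Assuming you’ve linked the project with netlify link (or netlify init), you can set it from the CLI, or add it in the Netlify UI:

netlify env:set OPENAI_API_KEY <your-api-key>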

#Build the Chat UI

Before getting too deep into the features of the chat app, let’s build a working UI. Create a Chat.tsx file inside src/components and fill it in with the following:

/src/components/Chat.tsx
import { useState, useRef, useEffect } from "react";

// A type to differentiate the source of a message and its content
interface Message {
  role: "user" | "assistant";
  content: string;
}

export default function Chat() {
  // We store our messages in an array in state so we can render the structured conversation
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState("");
  const [isLoading, setIsLoading] = useState(false);
  const [hasContext, setHasContext] = useState(false);

  // These refs let us manage input focus and keep the scroll view pinned to the latest message
  const messagesEndRef = useRef<HTMLDivElement>(null);
  const inputRef = useRef<HTMLInputElement>(null);

  function scrollToBottom() {
    messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
  }

  function handleInputChange(e: React.ChangeEvent<HTMLInputElement>) {
    setInput(e.target.value);
  }

  // Send a boolean that will clear the conversation's history
  async function startNewConversation() {
    try {
      await fetch("/.netlify/functions/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ newConversation: true }),
      });
      setMessages([]);
      setHasContext(false);
    } catch (error) {
      console.error("Error starting new conversation:", error);
    }
  }

  /**
   * Processes a streamed response from our API, updating the messages state
   * incrementally as chunks of the response arrive. Creates an empty assistant
   * message first, then updates it with incoming content until the stream ends.
   */
  async function processStreamedResponse(reader: ReadableStreamDefaultReader<Uint8Array>) {
    // A single decoder with { stream: true } correctly handles multi-byte
    // characters that get split across chunk boundaries
    const decoder = new TextDecoder();
    let assistantMessage = "";
    setMessages((prev) => [...prev, { role: "assistant", content: "" }]);
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      assistantMessage += decoder.decode(value, { stream: true });
      setMessages((prev) => [
        ...prev.slice(0, -1),
        { role: "assistant", content: assistantMessage },
      ]);
    }
  }

  // Submit user messages and invoke `processStreamedResponse` to handle our bot's returned message as it streams
  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();
    if (!input.trim() || isLoading) return;

    const userMessage = { role: "user" as const, content: input.trim() };
    setMessages((prev) => [...prev, userMessage]);
    setInput("");
    setIsLoading(true);

    try {
      const response = await fetch("/.netlify/functions/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ message: userMessage.content }),
      });

      if (!response.ok) throw new Error("Network response was not ok");

      const reader = response.body?.getReader();
      if (!reader) throw new Error("No reader available");

      await processStreamedResponse(reader);
      setHasContext(true);
    } catch (error) {
      console.error("Error:", error);
      setMessages((prev) => [
        ...prev,
        {
          role: "assistant",
          content: "Sorry, there was an error processing your request.",
        },
      ]);
    } finally {
      setIsLoading(false);
    }
  }

  function renderMessage(message: Message, index: number) {
    return (
      <div
        key={index}
        className={`mb-4 p-3 rounded-lg max-w-[80%] ${
          message.role === "user"
            ? "ml-auto bg-blue-600 text-white"
            : "mr-auto bg-gray-100 text-gray-800"
        }`}>
        <strong>{message.role === "user" ? "You: " : "AI: "}</strong>
        <span>{message.content}</span>
      </div>
    );
  }

  // Separate our effects to avoid unnecessary re-renders
  useEffect(() => {
    scrollToBottom();
  }, [messages]);

  useEffect(() => {
    if (!isLoading) {
      inputRef.current?.focus();
    }
  }, [isLoading]);

  return (
    <div className="flex flex-col h-[600px] border border-gray-200 rounded-lg bg-white">
      <div className="flex justify-between items-center p-4 border-b border-gray-200">
        <span className="text-sm text-gray-500">
          {hasContext ? "Conversation context: On" : "New conversation"}
        </span>
        <button
          onClick={startNewConversation}
          className="px-3 py-1 text-sm text-gray-600 hover:text-gray-800 border border-gray-300 rounded"
          disabled={isLoading}>
          New Conversation
        </button>
      </div>
      <div className="flex-1 overflow-y-auto p-4">
        {messages.map(renderMessage)}
        <div ref={messagesEndRef} />
      </div>
      <form onSubmit={handleSubmit} className="flex p-4 border-t border-gray-200 gap-2">
        <input
          ref={inputRef}
          type="text"
          value={input}
          onChange={handleInputChange}
          placeholder="Type your message..."
          className="flex-1 p-2 border border-gray-200 rounded text-base disabled:bg-gray-50 disabled:cursor-not-allowed"
          disabled={isLoading}
        />
        <button
          type="submit"
          disabled={isLoading}
          className="px-4 py-2 bg-blue-600 text-white rounded cursor-pointer text-base disabled:bg-blue-400 disabled:cursor-not-allowed">
          {isLoading ? "Sending..." : "Send"}
        </button>
      </form>
    </div>
  );
}

The UI above operates as follows:

  • While the conversation is new, the input awaits a message from the user, which will:
    • Send the message to the OpenAI API inside our endpoint
    • Create a new context object that we hold in Blob storage
    • Update the UI to acknowledge that the AI will now be fed the conversation’s context with each new message
  • Once that’s done, the AI reviews the context and begins to answer, streaming message chunks as it responds
  • We take those chunks and render them into the chat window until the response is complete
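
One thing the component doesn’t cover is getting onto a page. Here’s a minimal way to render it from an Astro page; your markup and styling can differ:

src/pages/index.astro
---
import Chat from "../components/Chat";
---
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Context-driven chatbot</title>
  </head>
  <body class="max-w-2xl mx-auto p-8">
    <!-- client:load hydrates the React component in the browser -->
    <Chat client:load />
  </body>
</html>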

To support the flow above, let’s move on and build our serverless function.

#Building the serverless function

Our UI expects some specific behavior from our serverless function. As we build it, we need to make sure the function can tell which action invoked the endpoint: starting a new conversation or sending a message in the current one.
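
Concretely, the client we just wrote sends one of two JSON bodies to the endpoint, so the contract the function must handle looks something like this sketch:

// The two request shapes Chat.tsx sends to /.netlify/functions/chat
type ChatRequest =
  | { newConversation: true } // clear the stored conversation history
  | { message: string }; // append a user message and stream back a reply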

Let’s write the function and make sure it connects to the flow we’ve described.

netlify/functions/chat.ts
import type { Context } from "@netlify/functions";
import { getDeployStore } from "@netlify/blobs";
import OpenAI from "openai";

const CHAT_KEY = "current-chat";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

export default async function (req: Request, context: Context) {
  if (req.method !== "POST") {
    return new Response("Method Not Allowed", { status: 405 });
  }

  try {
    const { message, newConversation } = await req.json();
    const store = getDeployStore("chat-history");

    // A new conversation resets the stored history and returns early
    if (newConversation) {
      await store.setJSON(CHAT_KEY, []);
      return new Response(JSON.stringify({ success: true }));
    }

    if (!message) {
      return new Response("Message is required", { status: 400 });
    }

    // Get history and update with the user's message
    const history = ((await store.get(CHAT_KEY, { type: "json" })) as ChatMessage[]) || [];
    const updatedHistory: ChatMessage[] = [...history, { role: "user", content: message }];

    // Stream the AI response, sending the full history as context
    const stream = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: updatedHistory,
      stream: true,
    });

    return new Response(
      new ReadableStream({
        async start(controller) {
          // Track the complete assistant response
          let assistantMessage = "";

          // Process each chunk from the AI stream
          for await (const chunk of stream) {
            const text = chunk.choices[0]?.delta?.content || "";
            assistantMessage += text;
            // Send the chunk to the client immediately for real-time display
            controller.enqueue(new TextEncoder().encode(text));
          }

          // Save the complete conversation history to blob storage
          await store.setJSON(CHAT_KEY, [
            ...updatedHistory,
            { role: "assistant", content: assistantMessage },
          ]);

          // Close the stream after saving
          controller.close();
        },
      }),
      {
        headers: {
          "Content-Type": "text/event-stream",
          "Cache-Control": "no-cache",
          "Connection": "keep-alive",
        },
      }
    );
  } catch (error) {
    console.error("Error:", error);
    return new Response(JSON.stringify({ error: "Internal Server Error" }), {
      status: 500,
    });
  }
}

With our serverless function handling the heavy lifting of streaming responses and managing the conversation state, we’ve created a great foundation for AI chat interactions.
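
If you want to verify the endpoint without the UI, you can run the project with netlify dev (recent versions of the CLI inject your environment variables and sandbox Netlify Blobs locally) and stream a response with curl. Port 8888 is the CLI’s default, so adjust if yours differs:

netlify dev

curl -N -X POST http://localhost:8888/.netlify/functions/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!"}'

The -N flag disables curl’s output buffering so you can watch the chunks arrive in real time.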

#Final thoughts

We now have a fully context-driven chatbot in just a few files! We spun up a new Astro project, built our UI to handle streaming content from our serverless function, and used Netlify Blobs to store the chat history we feed into the OpenAI API on every request. This gives our bot the ability to hold a real conversation rather than return a single, contextless response.

Some next steps to think about implementing could be:

  • Adding rate limiting to make sure our bot doesn’t get abused
  • Adding sessions & authentication so users can revisit previous conversations (see the sketch below)
  • Creating a more robust UI for handling multiple chat sessions
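
On the sessions idea, a first step could be as small as storing each conversation under its own blob key instead of the shared CHAT_KEY. A hypothetical sketch, where sessionId would come from a cookie or whatever auth layer you add:

// Hypothetical: one blob key per conversation instead of the shared CHAT_KEY.
// `sessionId` is assumed to come from a cookie or your auth layer.
function chatKeyFor(sessionId: string): string {
  return `chat:${sessionId}`;
}

// Then, in the handler:
// const history = ((await store.get(chatKeyFor(sessionId), { type: "json" })) as ChatMessage[]) || [];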

These are just a few ideas for building on your foundational understanding of working with AI. Good luck and happy building!