Streaming UI
Now that our data is ingested into ChromaDB, we can build a streaming UI that renders the model's response as it arrives.
Creating the Citations and TypingIndicator Components
We will start by creating two components that are important for our project: Citations and TypingIndicator. Citations displays the sources used in an answer, and TypingIndicator shows an animated loading state while the model is responding.
"use client";
import { useState } from "react";
interface Citation {
id: number;
source: string;
chunkIndex: number;
preview: string;
}
export default function Citations({ citations }: { citations: Citation[] }) {
const [expanded, setExpanded] = useState(false);
if (!citations || citations.length === 0) return null;
return (
<div className="mt-3 space-y-2">
<button
onClick={() => setExpanded(!expanded)}
className="flex items-center gap-2 text-sm text-white/60 hover:text-white/90 transition-colors"
>
<svg
className={`w-4 h-4 transition-transform ${expanded ? "rotate-90" : ""}`}
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
>
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 5l7 7-7 7" />
</svg>
<span>{citations.length} source{citations.length > 1 ? "s" : ""}</span>
</button>
{expanded && (
<div className="space-y-2 animate-fadeIn">
{citations.map((citation) => (
<div
key={citation.id}
className="p-3 rounded-lg bg-white/5 border border-white/10 hover:border-white/20 transition-colors"
>
<div className="flex items-start gap-2">
<span className="text-xs font-mono text-blue-400 mt-0.5">
[{citation.id}]
</span>
<div className="flex-1 min-w-0">
<div className="text-sm font-medium text-white/90 truncate" title={citation.source}>
{citation.source.split('/').pop() || citation.source}
</div>
<div className="text-xs text-white/50 mt-1">
Chunk {citation.chunkIndex}
</div>
<div className="text-xs text-white/40 mt-2 line-clamp-2">
{citation.preview}
</div>
</div>
</div>
</div>
))}
</div>
)}
</div>
);
}
Next, create the TypingIndicator component:
"use client";
export default function TypingIndicator() {
return (
<div className="flex gap-1 items-center px-4 py-3">
<div className="w-2 h-2 bg-white/40 rounded-full animate-bounce [animation-delay:-0.3s]"></div>
<div className="w-2 h-2 bg-white/40 rounded-full animate-bounce [animation-delay:-0.15s]"></div>
<div className="w-2 h-2 bg-white/40 rounded-full animate-bounce"></div>
</div>
);
}
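Before wiring these components into the page, it helps to see the data shape they expect. The following is a minimal standalone sketch with hypothetical sample values showing how a `Citation[]` array maps to what the component renders; note that the header displays only the filename (the last path segment), falling back to the full path:

```typescript
// Hypothetical sample data matching the Citation interface from the component above.
interface Citation {
  id: number;
  source: string;
  chunkIndex: number;
  preview: string;
}

const sample: Citation[] = [
  {
    id: 1,
    source: "notes/projects/second-brain.md", // hypothetical file path
    chunkIndex: 3,
    preview: "We split each note into overlapping chunks before embedding...",
  },
];

// Mirrors the component's header logic: show only the filename,
// falling back to the full path when there is no separator.
const displayName = sample[0].source.split("/").pop() || sample[0].source;
console.log(displayName); // "second-brain.md"
```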
Update the globals.css file
@import "tailwindcss";
:root {
--background: #ffffff;
--foreground: #171717;
}
@theme inline {
--color-background: var(--background);
--color-foreground: var(--foreground);
--font-sans: var(--font-geist-sans);
--font-mono: var(--font-geist-mono);
}
@media (prefers-color-scheme: dark) {
:root {
--background: #0a0a0a;
--foreground: #ededed;
}
}
body {
background: var(--background);
color: var(--foreground);
font-family: Arial, Helvetica, sans-serif;
}
/* Animations */
@keyframes fadeIn {
from {
opacity: 0;
transform: translateY(10px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
.animate-fadeIn {
animation: fadeIn 0.3s ease-out forwards;
}
/* Custom Scrollbar */
::-webkit-scrollbar {
width: 8px;
height: 8px;
}
::-webkit-scrollbar-track {
background: rgba(255, 255, 255, 0.05);
}
::-webkit-scrollbar-thumb {
background: rgba(255, 255, 255, 0.2);
border-radius: 4px;
}
::-webkit-scrollbar-thumb:hover {
background: rgba(255, 255, 255, 0.3);
}
/* Markdown Prose Styling */
.prose {
color: inherit;
}
.prose code {
font-size: 0.875em;
}
.prose pre {
margin: 0.5rem 0;
padding: 0;
background: transparent;
}
.prose h1, .prose h2, .prose h3, .prose h4, .prose h5, .prose h6 {
color: rgba(255, 255, 255, 0.95);
font-weight: 600;
margin-top: 1em;
margin-bottom: 0.5em;
}
.prose a {
color: rgb(96, 165, 250);
text-decoration: underline;
}
.prose a:hover {
color: rgb(147, 197, 253);
}
/* Line clamp utility */
.line-clamp-2 {
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
overflow: hidden;
}
Create the POST API route for chat
Now, whenever a user sends a query, the frontend will make a POST call to the /api/chat route with the message history as the body, and the route will stream the model's response back to the UI.
import { getOrCreateCollection } from "@/lib/chromaClient";
import { google } from "@ai-sdk/google";
import { convertToModelMessages, streamText, type UIMessage } from 'ai';
import { NextRequest } from "next/server";
export const runtime = "nodejs";
export const maxDuration = 40; // Allow streaming for up to 40s
function extractUserMessageText(msg: any): string {
if (Array.isArray(msg.parts)) {
return msg.parts
.map((p: any) => (p.type === "text" ? p.text : ""))
.join("\n")
.trim();
}
return "";
}
const COLLECTION_NAME = "secondbrain";
export async function POST(req: NextRequest) {
const { messages } = (await req.json() as { messages: UIMessage[] });
if (!messages?.length) {
return new Response("Missing messages", { status: 400 });
}
// Get last user message as query
const lastUserIndex = [...messages].reverse().findIndex((m) => m?.role === "user");
if (lastUserIndex === -1) {
return new Response("No user message found", { status: 400 });
}
const realIndex = messages?.length - 1 - lastUserIndex;
const lastUserMessage = messages[realIndex];
const query = extractUserMessageText(lastUserMessage);
if (!query?.trim()) {
return new Response("Empty user query", { status: 400 });
}
// RAG: Retrieve Top-k chunks from Chroma
const collection = await getOrCreateCollection(COLLECTION_NAME);
const ragResults = await collection?.query({
queryTexts: [query],
nResults: 5,
include: ["documents", "metadatas"],
});
const docs = (ragResults?.documents?.[0] ?? []) as string[];
const metas = (ragResults?.metadatas?.[0] ?? []) as Record<string, any>[];
// Build a context block
const context = docs
?.map((doc, i) => {
const meta = metas[i] || {};
const source = meta?.filePath ?? meta?.path ?? "unknown";
const chunkIndex = meta?.chunkIndex ?? i;
return `Source ${i + 1} (file: ${source}, chunk: ${chunkIndex}):\n${doc}`;
}).join("\n\n");
const systemPrompt = `
You are Adi's personal Second Brain assistant.
Use ONLY the information provided in the "Context" section below when answering.
If the answer is not clearly contained in the context, say:
"I don't have that in my Second Brain yet."
When you answer:
- Be concise but clear.
- Prefer bullet points where it helps.
- If relevant, mention which source(s) you used.
`.trim();
const augmentedLastUser: UIMessage = {
id: lastUserMessage?.id,
role: "user",
parts: [
{
type: "text",
text: `${query}
---
Context from Adi's Second Brain:
${context || "[no matching context found]"}
Now answer the user's question strictly based on the context above.
`
}
],
}
// Build model messages: system + previous history + augmented last user
const uiMessagesWithContext: UIMessage[] = [
{
id: "system-1",
role: "system",
parts: [{
type: "text",
text: systemPrompt
}],
},
...messages?.slice(0, realIndex),
augmentedLastUser,
];
// Stream response
const result = streamText({
model: google("gemini-1.5-flash"),
messages: convertToModelMessages(uiMessagesWithContext),
});
// Return streaming response compatible with useChat
return result?.toUIMessageStreamResponse();
}
This route handles the core RAG (Retrieval-Augmented Generation) logic:
- Message Extraction: It identifies the latest user query from the message history.
- Vector Search: It queries the ChromaDB collection to find the top 5 most relevant document chunks based on the user's query.
- Context Augmentation: It builds a context block containing the retrieved text and source metadata (file paths).
- System Prompting: It defines strict instructions for the AI to only use the provided context and handle missing information gracefully.
- Streaming: It uses the AI SDK's `streamText` with Google Gemini to provide a real-time typing experience in the UI.
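To make the context-augmentation step concrete, here is a small standalone sketch (with hypothetical sample chunks and file paths) of the context block format the route builds from the retrieved documents and their metadata:

```typescript
// Hypothetical retrieved chunks, shaped like the documents/metadatas
// arrays returned by collection.query() in the route above.
const docs: string[] = [
  "ChromaDB stores embeddings alongside their source documents.",
  "Cosine distance works well for normalized text embeddings.",
];
const metas: { filePath?: string; chunkIndex?: number }[] = [
  { filePath: "notes/chroma.md", chunkIndex: 0 },
  { filePath: "notes/search.md", chunkIndex: 2 },
];

// Same formatting as the route: one labelled block per retrieved chunk,
// so the model can cite sources by number and file.
const context = docs
  .map((doc, i) => {
    const meta = metas[i] ?? {};
    const source = meta.filePath ?? "unknown";
    const chunkIndex = meta.chunkIndex ?? i;
    return `Source ${i + 1} (file: ${source}, chunk: ${chunkIndex}):\n${doc}`;
  })
  .join("\n\n");

console.log(context);
```

Each labelled block gives the model enough metadata to mention which source it used, which the system prompt encourages.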
Create the Chat UI Page
Now that the chat API route is ready, we want to create a chat UI page to display the streamed response. Create a new file app/page.tsx and add the following code:
"use client";
import { UIMessage, useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import { useState, useRef, useEffect } from "react";
import Markdown from "react-markdown";
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
import { vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism';
import TypingIndicator from "./components/TypingIndicator";
export default function ChatPage() {
const [input, setInput] = useState("");
const { messages, sendMessage, status } = useChat({
transport: new DefaultChatTransport({
api: '/api/chat'
}),
});
const bottomRef = useRef<HTMLDivElement>(null);
useEffect(() => {
bottomRef.current?.scrollIntoView({ behavior: "smooth" });
}, [messages]);
const onSubmit = async (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim()) return;
await sendMessage({ text: input });
setInput("");
};
const isLoading = status === "submitted" || status === "streaming";
return (
<div className="min-h-screen flex flex-col bg-gradient-to-br from-gray-900 via-black to-gray-900 text-white">
<header className="px-6 py-4 border-b border-white/10 backdrop-blur-sm bg-black/20 sticky top-0 z-10">
<div className="max-w-4xl mx-auto flex items-center justify-between">
<h1 className="text-xl font-semibold flex items-center gap-2">
<span className="text-2xl">🧠</span>
Second Brain
</h1>
<div className="text-sm text-white/50">
{messages.length > 0 && `${messages.length} message${messages.length > 1 ? 's' : ''}`}
</div>
</div>
</header>
<main className="flex-1 overflow-y-auto p-6">
<div className="max-w-4xl mx-auto space-y-6">
{messages.length === 0 && (
<div className="flex flex-col items-center justify-center min-h-[60vh] text-center animate-fadeIn">
<div className="text-6xl mb-6">💡</div>
<h2 className="text-3xl font-bold mb-3 bg-gradient-to-r from-blue-400 to-purple-400 bg-clip-text text-transparent">
Welcome to Your Second Brain
</h2>
<p className="text-white/60 max-w-md mb-8">
Ask me anything about your knowledge base. I'll search through your notes and provide answers with source citations.
</p>
<div className="grid grid-cols-1 md:grid-cols-2 gap-3 w-full max-w-2xl">
{[
"What are my recent notes about?",
"Summarize my thoughts on...",
"Find information about...",
"What do I know about..."
].map((suggestion, i) => (
<button
key={i}
onClick={() => setInput(suggestion)}
className="px-4 py-3 rounded-xl bg-white/5 hover:bg-white/10 border border-white/10 hover:border-white/20 transition-all text-sm text-left"
>
{suggestion}
</button>
))}
</div>
</div>
)}
{messages.map((m, idx) => (
<div
key={m.id}
className={`flex ${m.role === "user" ? "justify-end" : "justify-start"} animate-fadeIn`}
style={{ animationDelay: `${idx * 0.05}s` }}
>
<div className={`max-w-[85%] ${m.role === "user" ? "ml-auto" : "mr-auto"}`}>
<div className={`px-5 py-3 rounded-2xl ${m.role === "user"
? "bg-gradient-to-r from-blue-600 to-blue-500 text-white shadow-lg shadow-blue-500/20"
: "bg-white/5 backdrop-blur-sm text-white border border-white/10"
}`}>
{m.parts.map((p, i) => (
p.type === "text" ? (
<div key={i} className="prose prose-invert prose-sm max-w-none">
<Markdown
components={{
code({ node, inline, className, children, ...props }: any) {
const match = /language-(\w+)/.exec(className || '');
return !inline && match ? (
<SyntaxHighlighter
style={vscDarkPlus}
language={match[1]}
PreTag="div"
className="rounded-lg my-2"
{...props}
>
{String(children).replace(/\n$/, '')}
</SyntaxHighlighter>
) : (
<code className="bg-white/10 px-1.5 py-0.5 rounded text-sm" {...props}>
{children}
</code>
);
},
p: ({ children }) => <p className="mb-2 last:mb-0">{children}</p>,
ul: ({ children }) => <ul className="list-disc list-inside mb-2 space-y-1">{children}</ul>,
ol: ({ children }) => <ol className="list-decimal list-inside mb-2 space-y-1">{children}</ol>,
li: ({ children }) => <li className="text-white/90">{children}</li>,
strong: ({ children }) => <strong className="font-semibold text-white">{children}</strong>,
a: ({ children, href }) => (
<a href={href} className="text-blue-400 hover:text-blue-300 underline" target="_blank" rel="noopener noreferrer">
{children}
</a>
),
}}
>
{p.text}
</Markdown>
</div>
) : null
))}
</div>
</div>
</div>
))}
{isLoading && (
<div className="flex justify-start animate-fadeIn">
<div className="bg-white/5 backdrop-blur-sm rounded-2xl border border-white/10">
<TypingIndicator />
</div>
</div>
)}
<div ref={bottomRef} />
</div>
</main>
<form onSubmit={onSubmit} className="p-4 border-t border-white/10 bg-black/40 backdrop-blur-md sticky bottom-0">
<div className="max-w-4xl mx-auto flex gap-3">
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Ask your Second Brain anything..."
disabled={isLoading}
className="flex-1 px-5 py-3 rounded-xl bg-white/10 border border-white/20 focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-transparent placeholder:text-white/40 disabled:opacity-50 disabled:cursor-not-allowed transition-all"
/>
<button
type="submit"
disabled={isLoading || !input.trim()}
className="px-6 py-3 rounded-xl bg-gradient-to-r from-blue-600 to-blue-500 hover:from-blue-500 hover:to-blue-400 transition-all disabled:opacity-50 disabled:cursor-not-allowed font-medium shadow-lg shadow-blue-500/20 hover:shadow-blue-500/40"
>
{isLoading ? (
<span className="flex items-center gap-2">
<svg className="animate-spin h-4 w-4" viewBox="0 0 24 24">
<circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4" fill="none" />
<path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z" />
</svg>
Thinking...
</span>
) : (
"Send"
)}
</button>
</div>
</form>
</div>
);
}
Key implementation details for the Chat UI:
- Vercel AI SDK (`useChat`): This hook handles the heavy lifting of chat state management. It tracks `messages`, handles the `input` field, and automatically processes the streaming response from the server.
- `DefaultChatTransport`: We use this to explicitly point the `useChat` hook to our `/api/chat` endpoint. This provides a clean interface for network communication between the frontend and the AI route.
- Rich Text Rendering: The UI uses `react-markdown` and `react-syntax-highlighter` to render the AI's response. This ensures that code snippets, bold text, and lists are displayed with professional formatting.
- Auto-Scrolling: A combination of `useRef` and `useEffect` ensures that as new message chunks arrive, the window automatically scrolls to keep the latest content in view.
Next Steps
In the next section, we’ll:
- Create sessions to store the context of the conversation
- Store messages and sessions in MongoDB
If you want to know more about this, do check out our video guide:
