API Creation - Part 1
If you have reached this page, you have successfully set up the environment and completed the Supabase setup. Let's now start the actual work 👨🏻‍💻.
Supabase Connection Setup

Now that the basic backend server setup is done, we will connect our Supabase database to the server.
Install the `@supabase/supabase-js` package:

```sh
npm install @supabase/supabase-js
```

Once installed, go to your project root folder and create a file at `helpers/supabaseClient.js`:

```js
// supabaseClient.js
import { createClient } from "@supabase/supabase-js";
import dotenv from "dotenv";

// To access the API keys in .env
dotenv.config();

export const createSupabaseClient = () => {
  const supabaseUrl = process.env.SUPABASE_URL;
  const supabaseAnonKey = process.env.SUPABASE_ANON_KEY;
  return createClient(supabaseUrl, supabaseAnonKey);
};
```

Here we create a function that connects our Node server to Supabase using the `createClient` method from `@supabase/supabase-js`.
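One easy mistake at this stage is a missing or misnamed `.env` entry, which makes `createClient` fail with a confusing error later. Below is a minimal fail-fast sketch; the `requireEnv` helper and the `fakeEnv` object are hypothetical, used only for illustration:

```javascript
// Hypothetical guard: fail fast when a Supabase env var is missing,
// instead of letting createClient fail with a confusing error later.
function requireEnv(name, env = process.env) {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example with a fake env object standing in for process.env
const fakeEnv = {
  SUPABASE_URL: "https://example.supabase.co",
  SUPABASE_ANON_KEY: "anon-key",
};
console.log(requireEnv("SUPABASE_URL", fakeEnv)); // prints the URL
```

You could call `requireEnv("SUPABASE_URL")` and `requireEnv("SUPABASE_ANON_KEY")` at the top of `createSupabaseClient` so a misconfigured environment fails loudly at startup.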
Setting up the store-document API route and service

Create two folders, `routes` and `services`, inside your server folder. `routes` will contain all our API routes, and `services` will contain the logic behind them.
Inside `routes`, create a file `storeDocumentRoutes.js` where you will define the route:

```js
// storeDocumentRoutes.js
import express from "express";
// This will be the actual logic we will implement
import { storeDocument } from "../services/storeDocumentService.js";

const router = express.Router();

// Handle store document route
router.post("/", async (req, res) => {
  try {
    const result = await storeDocument(req);
    res.status(200).json(result);
  } catch (error) {
    console.error("Error in storeDocument: ", error);
    res.status(500).json({ error: "An error occurred during the request." });
  }
});

export default router;
```

Now import the route in your `index.js` entry point for the server:

```js
// index.js
import express from "express";
import cors from "cors";
import storeDocumentRoute from "./routes/storeDocumentRoutes.js";

const app = express();

// Middleware to parse JSON request bodies
app.use(express.json());

// Configure and use the CORS middleware
const corsOptions = {
  origin: "http://localhost:5173",
  methods: ["GET", "POST", "PUT", "DELETE"],
  allowedHeaders: ["Content-Type", "Authorization"],
};
app.use(cors(corsOptions));

app.use("/store-document", storeDocumentRoute);

app.listen(7004, () => {
  console.log("Server Running on PORT 7004");
});

export default app;
```

Inside `services`, create a file `storeDocumentService.js`:

```js
// storeDocumentService.js
export async function storeDocument(req) {
  return { ok: true };
}
```

This is just a stub for testing; we will add the actual logic in the later stages.
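Before wiring in the real logic, you can sanity-check the route/service contract without Express. The sketch below uses plain objects as stand-ins for `req`/`res`, and `handle` is a hypothetical mirror of the `router.post` callback; it shows how a resolved service result becomes a 200 response and a thrown error becomes a 500:

```javascript
// Sketch of the route/service contract, without Express. "handle" is a
// hypothetical stand-in for the router.post callback in storeDocumentRoutes.js.
async function storeDocument(req) {
  // Stub service, as in storeDocumentService.js
  return { ok: true };
}

async function handle(req) {
  try {
    const result = await storeDocument(req);
    return { status: 200, body: result }; // success path -> 200
  } catch (error) {
    // failure path -> 500, same message the route returns
    return { status: 500, body: { error: "An error occurred during the request." } };
  }
}

handle({ body: {} }).then((res) => console.log(res.status, res.body));
```

Once the server is running, the same contract can be exercised over HTTP with a POST request to `http://localhost:7004/store-document`.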
Initialising Embeddings and Vector Store

Install the Google GenAI provider from LangChain:
```sh
npm install @langchain/google-genai
```

In our `storeDocumentService.js` we make the following additions:

```js
// storeDocumentService.js
import { createSupabaseClient } from "../helpers/supabaseClient.js";
import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
import { TaskType } from "@google/generative-ai";

export async function storeDocument(req) {
  try {
    // Initialising the Supabase client
    const supabase = createSupabaseClient();

    // Generating embeddings using the @langchain/google-genai package
    const embeddings = new GoogleGenerativeAIEmbeddings({
      model: "gemini-embedding-001", // The model used to generate embeddings
      taskType: TaskType.RETRIEVAL_DOCUMENT,
      title: "Youtube Rag",
    });
  } catch (error) {
    console.error(error);
    // Return false if there is any error
    return { ok: false };
  }
  return { ok: true };
}
```

You can use any LLM of your choice from LangChain's list of providers.

Learn more about TaskTypes here.
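As a rough illustration, the task-type choice can be thought of as a lookup keyed on what you are embedding. The `TASK_TYPES` map and `taskTypeFor` helper below are hypothetical; the string values mirror Google's `TaskType` enum (`RETRIEVAL_DOCUMENT` for content you are indexing, `RETRIEVAL_QUERY` for a search query):

```javascript
// Hypothetical helper: pick a task type depending on whether we are
// embedding stored documents or an incoming user query. The string
// values mirror Google's TaskType enum names.
const TASK_TYPES = {
  document: "RETRIEVAL_DOCUMENT", // use when indexing content
  query: "RETRIEVAL_QUERY",       // use when embedding a search query
};

function taskTypeFor(kind) {
  const taskType = TASK_TYPES[kind];
  if (!taskType) throw new Error(`Unknown embedding kind: ${kind}`);
  return taskType;
}

console.log(taskTypeFor("document")); // "RETRIEVAL_DOCUMENT"
```

In this tutorial we only store documents, so `RETRIEVAL_DOCUMENT` is the right choice here; the query side comes into play in the retrieval stage later.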
Now let us initialise the vector store:

```js
// storeDocumentService.js
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";

const vectorStore = new SupabaseVectorStore(embeddings, {
  client: supabase,
  tableName: "embedded_documents",
  queryName: "match_documents",
});
```

Here we initialise a `SupabaseVectorStore` to store the embeddings, defining the `client`, `tableName`, and `queryName`.
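Under the hood, a `match_documents`-style query ranks stored rows by vector similarity to the query embedding. Here is a simplified sketch of that math, using plain cosine similarity over tiny 3-dimensional vectors; in reality the heavy lifting happens in pgvector inside Postgres, not in Node:

```javascript
// Simplified sketch of the similarity math behind a match_documents-style
// query: rank stored vectors by cosine similarity to a query vector.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy "table" of embedded rows and a toy query embedding
const stored = [
  { id: 1, embedding: [1, 0, 0] },
  { id: 2, embedding: [0, 1, 0] },
];
const query = [0.9, 0.1, 0];

const ranked = stored
  .map((row) => ({ id: row.id, score: cosineSimilarity(row.embedding, query) }))
  .sort((x, y) => y.score - x.score);

console.log(ranked[0].id); // 1 — the closest stored vector
```

This is why `tableName` and `queryName` matter: the table holds the embedding column, and the SQL function named by `queryName` performs exactly this kind of ranking on it.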
Access Youtube Video

We will be using the Youtube Loader from LangChain Loaders.

```js
// storeDocumentService.js
import { YoutubeLoader } from "@langchain/community/document_loaders/web/youtube";

// Get the YouTube video URL from the user
const { url } = req.body;

// Create a loader for the video from its URL using YoutubeLoader
const loader = await YoutubeLoader.createFromUrl(url, {
  addVideoInfo: true,
});

// Load the data
const docs = await loader.load();

// You can print this to see how the data is returned in the response
console.log("Video Data: ", docs);
```
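For intuition, the first thing the loader must do is identify the video from the URL. The `extractVideoId` function below is an illustrative sketch of that parsing step, not the loader's actual code, and it only covers the two most common URL shapes:

```javascript
// Illustrative sketch (not the loader's real implementation): extract the
// 11-character video ID from common YouTube URL shapes.
function extractVideoId(url) {
  const patterns = [
    /youtube\.com\/watch\?v=([\w-]{11})/, // long form
    /youtu\.be\/([\w-]{11})/,             // short form
  ];
  for (const pattern of patterns) {
    const match = url.match(pattern);
    if (match) return match[1];
  }
  return null; // not a recognisable video URL
}

console.log(extractVideoId("https://www.youtube.com/watch?v=dQw4w9WgXcQ")); // "dQw4w9WgXcQ"
console.log(extractVideoId("https://example.com/not-a-video")); // null
```

A `null` result here is roughly the situation where `createFromUrl` would throw, which is why validating `req.body.url` before calling the loader is worthwhile.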
Splitting the Docs into Chunks

Now that we have our video data, the next step is to split the docs into small chunks. We will use `@langchain/textsplitters` here. You can install it with:

```sh
npm install @langchain/textsplitters
```

```js
// storeDocumentService.js
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const texts = await textSplitter.splitDocuments(docs);
```
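To see what `chunkSize` and `chunkOverlap` mean concretely, here is a simplified character-based sketch. The real `RecursiveCharacterTextSplitter` additionally prefers to break on separators (paragraphs, sentences, words) rather than cutting mid-word, so treat this as the underlying idea, not its implementation:

```javascript
// Simplified sketch of chunkSize/chunkOverlap semantics: each chunk starts
// (chunkSize - chunkOverlap) characters after the previous one, so adjacent
// chunks share chunkOverlap characters of context.
function splitWithOverlap(text, chunkSize, chunkOverlap) {
  const chunks = [];
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

const demo = splitWithOverlap("abcdefghij", 4, 2);
console.log(demo); // [ 'abcd', 'cdef', 'efgh', 'ghij' ]
```

The overlap (here `"cd"`, `"ef"`, `"gh"`) is what keeps a sentence that straddles a chunk boundary retrievable from either side; 1000/200 plays the same role at transcript scale.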
Generating Document ID

To generate a unique ID, we install a package:

```sh
npm install uuid
```

```js
// storeDocumentService.js
import { v4 as uuidv4 } from "uuid";

const documentId = uuidv4();

// Check that it is getting created
console.log("Generated ID: ", documentId);

const docsWithMetaData = texts.map((text) => ({
  ...text,
  metadata: {
    ...(text.metadata || {}),
    documentId,
  },
}));

await vectorStore.addDocuments(docsWithMetaData);
```

With this we generate a unique ID for every entry and store the video, along with its metadata, transcript, and vector embeddings, in the Supabase database.
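The key idea in the mapping step is that every chunk carries the same `documentId`, so all chunks of one video can later be fetched (or deleted) together. Here is a self-contained sketch, with a fixed sample ID standing in for `uuidv4()` and toy chunks standing in for `texts`:

```javascript
// A fixed sample ID stands in for uuidv4() in this sketch.
const documentId = "11111111-2222-3333-4444-555555555555";

// Toy chunks standing in for the splitter's output
const texts = [
  { pageContent: "chunk one", metadata: { title: "Demo video" } },
  { pageContent: "chunk two" }, // metadata may be missing entirely
];

// Spread existing metadata (guarding against a missing metadata object)
// and stamp every chunk with the shared documentId.
const docsWithMetaData = texts.map((text) => ({
  ...text,
  metadata: { ...(text.metadata || {}), documentId },
}));

console.log(docsWithMetaData[0].metadata.title); // "Demo video" is preserved
console.log(docsWithMetaData.every((d) => d.metadata.documentId === documentId)); // true
```

The `...(text.metadata || {})` guard matters: it keeps whatever metadata the loader attached (title, author, and so on) while tolerating chunks that have none.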
This is how your `storeDocumentService.js` will look after the above steps:

```js
// storeDocumentService.js
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { createSupabaseClient } from "../helpers/supabaseClient.js";
import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
import { YoutubeLoader } from "@langchain/community/document_loaders/web/youtube";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { v4 as uuidv4 } from "uuid";

export async function storeDocument(req) {
  try {
    if (!req?.body?.url) {
      throw new Error("URL is required in the request body");
    }

    const { url } = req.body;
    const supabase = createSupabaseClient();

    const embeddings = new GoogleGenerativeAIEmbeddings({
      model: "embedding-001", // ✅ Safe default
    });

    const vectorStore = new SupabaseVectorStore(embeddings, {
      client: supabase,
      tableName: "embedded_documents",
      queryName: "match_documents",
    });

    // ✅ Await loader creation
    const loader = await YoutubeLoader.createFromUrl(url, {
      addVideoInfo: true,
    });

    const docs = await loader.load();

    // Prepend the video title so it is embedded along with the transcript
    if (docs[0]) {
      docs[0].pageContent = `Video title: ${docs[0].metadata.title} | Video context: ${docs[0].pageContent}`;
    }

    const textSplitter = new RecursiveCharacterTextSplitter({
      chunkSize: 1000,
      chunkOverlap: 200,
    });

    const texts = await textSplitter.splitDocuments(docs);

    if (!texts.length || !texts[0].pageContent) {
      throw new Error("Document has no content to embed.");
    }

    const documentId = uuidv4();
    console.log("Generated DocumentID:", documentId);
    console.log("First chunk preview:", texts[0].pageContent.slice(0, 100));

    const docsWithMetaData = texts.map((text) => ({
      ...text,
      metadata: { ...(text.metadata || {}), documentId },
    }));

    await vectorStore.addDocuments(docsWithMetaData);
  } catch (error) {
    console.error("❌ storeDocument Error:", error.message);
    // Report the failure so the route does not claim success
    return { ok: false };
  }

  return { ok: true };
}
```

⚙️ Next Steps
In the next section, we'll:

- Create the conversation ID and link the conversation documents in the database
- Start with the `fetch-document` API to fetch data from the database based on the user query
- Create a complete LLM RAG pipeline
If you want to know more about this, do check out our video: