Build an article suggestion app with Supabase pgvector and Portkey
Consider that you have a list of support articles that you want to suggest to users when they search for help, and you want the suggestions to fit as closely as possible. With the availability of tools like Large Language Models (LLMs) and vector databases, the approach to suggestion and recommendation systems has evolved significantly.
In this article, we will create a simple NodeJS application that stores support articles (only titles, for simplicity), performs a vector similarity search through their embeddings, and returns the best-matching article to the user.
A quick disclaimer:
This article is meant to give you a map to help you get started and navigate solutions to similar problem statements.
If you’d like to start by tinkering with the code, explore the codebase on this Repl.
What makes vector similarity special?
Short answer: Embeddings.
The technique of translating a piece of content into a vector representation is called embedding. Embeddings allow you to analyze semantic content mathematically.
LLMs can turn our content into vector representations and place them in a vector space, where similarity is determined by the distance between two embeddings. These embeddings are stored in vector databases.
In this article, we will use Supabase and enable pgvector to store vectors.
Overview of our app
Our app will utilize the Supabase vector database to maintain articles in the form of embeddings. Upon receiving a new query, the database will intelligently recommend the most relevant article.
This is how the process will work:
- The application will read a text file containing a list of article titles.
- It will then use OpenAI models through Portkey to convert the content into embeddings.
- These embeddings will be stored in pgvector, along with a function that enables similarity matching.
- When a user enters a new query, the application will return the most relevant article based on the similarity match database function.
Setup
Get going by setting up three things for this tutorial: a NodeJS project, Portkey, and Supabase.
Portkey
- Sign up and log in to the Portkey dashboard.
- Copy your OpenAI API key and add it to the Portkey Vault.
This will give you a unique identifier, called a virtual key, that you can reference in code. More on this later.
Supabase
Head to Supabase and create a New Project. Give it a name of your choice; I will label mine “Product Wiki”. This step provides access keys, such as the Project URL and the API key. Save them.
The project is ready!
We want to store embeddings in our database. To enable the database to store them, activate the vector extension from Dashboard > Database > Extensions.
NodeJS
Navigate to a directory of your choice and run the following commands (a minimal setup sketch; the two SDK package names, portkey-ai and @supabase/supabase-js, are my assumptions for this tutorial):
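```sh
# Initialize a new NodeJS project (creates package.json)
npm init -y

# Use ES modules so we can use `import` and top-level `await`
npm pkg set type=module

# Install the Portkey and Supabase SDKs
npm install portkey-ai @supabase/supabase-js
```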
You will see the project files, including package.json, created. Since we want to store the list of articles in the database, we have to read them from a file. Create articles.txt and copy in a list of support article titles. The original list isn’t reproduced here, so any titles will do; for example:
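```
How to reset your password
Managing team members and permissions
Setting up two-factor authentication
Exporting your data as CSV
Understanding your monthly invoice
Troubleshooting failed payments
```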
Open index.js and you are ready. Let’s start writing code.
Step 1: Importing and authenticating Portkey and Supabase
Since our app is set to interact with OpenAI (via Portkey) and the Supabase pgvector database, let’s import the necessary SDK clients to run operations on them. Here is a sketch of the setup; the environment variable names and the sample query are placeholders of my choosing:
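```js
import fs from 'fs';
import Portkey from 'portkey-ai';
import { createClient } from '@supabase/supabase-js';

// Portkey client: the virtual key points to the OpenAI API key stored in the Vault.
const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  virtualKey: process.env.OPENAI_VIRTUAL_KEY,
});

// Supabase client, created with the Project URL and API key from the dashboard.
const supabase = createClient(
  process.env.SUPABASE_PROJECT_URL,
  process.env.SUPABASE_API_KEY
);

// A sample query to search with; replace it with anything you like.
const USER_QUERY = 'How do I reset my password?';
```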
We import `fs` to help us read the list of articles from the `articles.txt` file, and `USER_QUERY` is the query we will use for the similarity search.
Step 2: Create a Table
We can use the SQL Editor to execute SQL queries. We will have one table for this project; let’s call it the `support_articles` table. It will store the title of each article along with its embedding. Feel free to add more fields of your choice, such as a description or tags.
For simplicity, create a table with columns for `id`, `content`, and `embedding`. Here is one way to write the query (a sketch that assumes OpenAI’s text-embedding-ada-002 model, whose embeddings have 1536 dimensions):
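```sql
create table support_articles (
  id bigserial primary key,
  content text not null,
  -- 1536 dimensions matches OpenAI's text-embedding-ada-002 model
  embedding vector(1536)
);
```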
Execute the above SQL query in the SQL editor.
A success message will appear in the Results tab once the execution succeeds, and you can verify that the table has been created by navigating to Database > Tables > support_articles.
Step 3: Read, Generate and Store embeddings
We will use the `fs` library to read `articles.txt` and convert every title on the list into an embedding. With Portkey, generating embeddings is straightforward: it works the same as the OpenAI SDK, with no additional code changes required. A minimal sketch (the helper name `generateEmbedding` and the model choice are my own):
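```js
// Generate an embedding for a piece of text.
// Portkey exposes the same embeddings API shape as the OpenAI SDK.
async function generateEmbedding(text) {
  const response = await portkey.embeddings.create({
    model: 'text-embedding-ada-002',
    input: text,
  });
  return response.data[0].embedding;
}
```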
Similarly, to store an embedding in Supabase (again a sketch; `storeEmbedding` is a helper name I chose):
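```js
// Insert an article title and its embedding into the support_articles table.
async function storeEmbedding(content, embedding) {
  const { error } = await supabase
    .from('support_articles')
    .insert({ content, embedding });
  if (error) throw error;
}
```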
To put everything together, we read from the file, generate the embeddings, and store them in Supabase:
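```js
// Read articles.txt, embed every non-empty line, and store it in pgvector.
async function storeSupportArticles() {
  const titles = fs
    .readFileSync('articles.txt', 'utf8')
    .split('\n')
    .filter((line) => line.trim().length > 0);

  for (const title of titles) {
    const embedding = await generateEmbedding(title);
    await storeEmbedding(title, embedding);
  }
}
```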
That’s it! All you need now is one line to store all the items in the pgvector database:
await storeSupportArticles();
You should now see the rows in the Table Editor.
Step 4: Create a database function to query the closest match
Next, let’s set up a database function to perform the vector similarity search in Supabase. This function will take a user query vector as an argument and return the `id`, `content`, and `similarity` score of the row that best matches the user query. Here is a sketch modeled on Supabase’s pgvector examples; the function name `match_support_articles` and its threshold and count parameters are my choices:
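```sql
create or replace function match_support_articles (
  query_embedding vector(1536),
  match_threshold float,
  match_count int
)
returns table (id bigint, content text, similarity float)
language sql stable
as $$
  select
    support_articles.id,
    support_articles.content,
    -- <=> is pgvector's cosine distance operator; 1 - distance = similarity
    1 - (support_articles.embedding <=> query_embedding) as similarity
  from support_articles
  where 1 - (support_articles.embedding <=> query_embedding) > match_threshold
  order by similarity desc
  limit match_count;
$$;
```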
Execute it in the SQL Editor, just as you did for the table creation.
Congratulations, our `support_articles` table can now power vector similarity search. No more waiting! Let’s run a search query.
Step 5: Query for the similarity match
The Supabase client can make a remote procedure call to invoke our vector similarity search function and find the nearest match to the user query. The arguments match the parameters we declared when creating the database function in Step 4. A sketch, using the helpers defined above:
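```js
// Embed the user query, then call the database function via RPC.
async function findBestMatch(query) {
  const queryEmbedding = await generateEmbedding(query);

  const { data, error } = await supabase.rpc('match_support_articles', {
    query_embedding: queryEmbedding,
    match_threshold: 0.7, // tune for your data
    match_count: 1,       // return only the single best article
  });
  if (error) throw error;
  console.log(data);
}

await findBestMatch(USER_QUERY);
```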
The console log shows the best match: the article’s `id`, its `content`, and the `similarity` score.
Afterthoughts
A single query for the best match to the user query above took 6 tokens and cost approximately $0.0001. During the development of this app, I used 2.4k tokens in total, with a mean latency of 383ms.
You might be wondering how I know all of this. It’s all thanks to the Portkey dashboard.
This information is incredibly valuable, especially in real-time production use. I encourage you to consider search use cases like this in your ongoing projects: recommendations, suggestions, and similar-item lookups.
Congratulations on making it this far! You now know how to work with embeddings in development and monitor your app in production.