Real-time semantic search takes only a little work, especially when the backend technologies are openly available on GitHub. This article supports that claim with an implementation built on Deephaven and Weaviate.
Artificial intelligence and machine learning (AI/ML) platforms continue to advance at a pace that is difficult to keep up with, especially for the average developer. Accessibility is vital to any software's success, and especially so in AI/ML, where ever-advancing tech needs to be usable by more than just the world's top minds. In that vein, Deephaven and Weaviate can combine to form the foundation for real-time semantic search. Both tools have APIs in multiple languages and expressive syntax, and both are open source. All of this maximizes accessibility in a field where many struggle to find it.
Deephaven and Weaviate are a powerful and accessible combination that can serve as the basis for the backend of semantic search engines and more.
Classical search engines rely on lexical search algorithms, which return exact matches based on user input. For instance, a lexical search for `pasta` in a database of recipes will only return results that actually contain the word `pasta`. Lexical search on a Deephaven table of recipes would look like this:
```python
pasta_recipes = recipes.where(["Description.contains(`pasta`)"])
```
Unfortunately, when it comes to pasta, there are many different varieties. Not every pasta recipe will actually contain the word `pasta`, and people searching for recipes tend to have specific tastes. What about a user who searches for `cavatelli and roma tomatoes`? Lexical search falls short with very specific queries.
Semantic search is a more modern approach to this problem. With semantic search, machine learning models encode the meaning of words and phrases to find matches based on proximity in a vector space. These vector spaces are created when natural language processing models vectorize words, phrases, and sentences into an N-dimensional state space. When a search term such as `pasta` is given, the input is vectorized into the same state space, and results are returned based on their proximity to the vectorized search term. So, a semantic search for `pasta` in a vector database of recipes will find and return recipes that include "nearby" words like `lasagna`, `macaroni`, `gnocchi`, `sauce`, and more.
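To make "proximity in a vector space" concrete, here's a minimal sketch using toy three-dimensional embeddings and cosine similarity. The words and vectors below are made up for illustration; real NLP models produce embeddings with hundreds of dimensions:

```python
import numpy as np

# Toy 3-D "embeddings" for illustration only
embeddings = {
    "lasagna": np.array([0.90, 0.10, 0.15]),
    "gnocchi": np.array([0.85, 0.20, 0.10]),
    "bicycle": np.array([0.05, 0.90, 0.70]),
}
query = np.array([0.88, 0.12, 0.14])  # the vectorized search term, e.g. "pasta"

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank stored items by proximity to the query vector; pasta-like words come first
ranked = sorted(embeddings, key=lambda w: cosine_similarity(query, embeddings[w]), reverse=True)
print(ranked)  # ['lasagna', 'gnocchi', 'bicycle']
```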
Classical search algorithms have been studied for decades. They are well understood and sufficient for many search problems. However, like AI/ML, search engines need to keep up with technological advances.
The rest of this blog uses code from the Deephaven + Weaviate repository, which contains a simple example of Deephaven combined with Weaviate to search a vector database of books for relevant items based on an input search term.
Weaviate as a backend vector database
Weaviate is an open-source vector database that stores data objects and vector embeddings from ML models, enabling semantic, hybrid, and generative search over the data. When paired with Deephaven, it can run those searches as queries arrive, returning the results to Deephaven to be displayed to the user, all in real time.
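As a rough sketch of what setup looks like, the snippet below connects to a cluster and defines a `Book` class whose text fields are vectorized by a Hugging Face model. It assumes the v3 `weaviate-client` Python package; the URL and keys are placeholders for your own credentials (see the README):

```python
import weaviate

# Placeholder URL and keys; substitute your own cluster and tokens
client = weaviate.Client(
    url="https://your-cluster.weaviate.network",
    auth_client_secret=weaviate.AuthApiKey(api_key="YOUR_WEAVIATE_KEY"),
    additional_headers={"X-HuggingFace-Api-Key": "YOUR_HF_TOKEN"},
)

# A "Book" class whose text properties are embedded by a Hugging Face model
client.schema.create_class({
    "class": "Book",
    "vectorizer": "text2vec-huggingface",
    "properties": [
        {"name": "title", "dataType": ["text"]},
        {"name": "description", "dataType": ["text"]},
        {"name": "language", "dataType": ["text"]},
    ],
})
```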
Deephaven as a search engine backend
Deephaven Community Core is an open-source query engine that seamlessly handles real-time big data. Hooked up to a front-end user interface, it could easily handle the influx of search terms as they happen.
This example handles user input through an input table called `book_search`. The second table, `book_recommendations`, contains the top result found by the Weaviate semantic search.
```python
from deephaven import dtypes as dht
from deephaven import input_table

# An input table that captures user search terms
coldefs = {"Concept": dht.string}
book_search = input_table(col_defs=coldefs)

def get_response(concept):
    # `client` is the Weaviate client created earlier in the script
    near_text = {"concepts": [concept]}
    return client.query.get("Book", ["title", "description", "language"]).with_near_text(near_text).with_limit(1).do()

def get_title(response) -> str:
    return response["data"]["Get"]["Book"][0]["title"]

def get_description(response) -> str:
    return response["data"]["Get"]["Book"][0]["description"]

# Each committed search term triggers a Weaviate query for the top match
book_recommendations = book_search.update([
    "Response = get_response(Concept)",
    "Title = get_title(Response)",
    "Description = get_description(Response)",
]).drop_columns("Response")
```
All of the Python code, including what's above, can be found in the repository's application mode folder.
A simple example
The Deephaven + Weaviate repository contains all of the code necessary to run Deephaven, connect to a Weaviate cluster, and perform semantic search on book title and description data. Upon connecting to the server, two tables should appear at the bottom of the user interface; if they don't, find them in the **Panels** dropdown at the top right corner of the UI. The bottom left table, `book_search`, is an input table in which you can manually enter search terms. For instance, searching for `fantasy with a female lead` will return a related book in the bottom right table, `book_recommendations`, which contains the title and description of the book.
Prerequisites
In order to run the application, you'll need:
- Deephaven's prerequisites
- A Weaviate Cluster and authentication token
- A Huggingface Inference API token
All of these are free.
Run the example
Once those are satisfied, clone the repository, set the required environment variables (see the README), and run the following command from your cloned repository:
```bash
docker compose up --build
```
The Docker logs will display a lot of information, including output from the Python scripts that run on startup. The first time you run the program, it downloads the entire Skelebor/book_titles_and_datasets dataset, which can take some time. Keep an eye out for the following line in the logs:
```
Server started on port 10000
```
When you see that, you can connect to the Deephaven server. Head to http://localhost:10000/ide/, enter your key (`DH_PSK`), and you're in. You should see two empty tables in the bottom half of the screen. You can also open them by clicking on the **Panels** dropdown in the top right of the UI. Enter search terms in the `book_search` table, click **Commit**, and watch the results flow into the `book_recommendations` table.
Take it further
The example was put together using Weaviate's free sandbox tier. It's rate-limited, which means you can't upload the entire dataset. You could take this example a step further by upgrading tiers, uploading more data, and getting more relevant results.
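If you do upgrade, importing additional rows is a short loop using the Weaviate client's batcher. Below is a minimal sketch, assuming the v3 Python client from the setup above; `more_books` is a placeholder for an iterable of extra rows from the dataset:

```python
# `more_books` is a placeholder for additional rows from the Hugging Face dataset
with client.batch as batch:
    batch.batch_size = 100
    for row in more_books:
        batch.add_data_object(
            {"title": row["title"], "description": row["description"], "language": row["language"]},
            class_name="Book",
        )
```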
If you choose to do that, you could also enable more search terms on the Deephaven side by using a string array column instead of a string column, so a single query can carry more than one search term. Additionally, there are other ways to ingest real-time data into Deephaven, such as a Kafka stream, which is a more suitable and scalable choice for real-world applications; a sketch of that swap follows.
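Here's a minimal sketch of replacing the input table with a Kafka-backed table using Deephaven's Kafka integration. The broker address and topic name are hypothetical; messages are assumed to be JSON like `{"Concept": "..."}`:

```python
from deephaven import kafka_consumer as kc
from deephaven.stream.kafka.consumer import TableType, KeyValueSpec
from deephaven import dtypes as dht

# Hypothetical broker and topic carrying user search terms as JSON
book_search = kc.consume(
    {"bootstrap.servers": "kafka:9092"},
    "book-searches",
    key_spec=KeyValueSpec.IGNORE,
    value_spec=kc.json_spec([("Concept", dht.string)]),
    table_type=TableType.append(),
)
```

The downstream `book_recommendations` query works unchanged, since it only depends on the `Concept` column of `book_search`.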
The example use case is basic but powerful, and put together without a ton of code or fancy scaffolding. It's amazing what can be accomplished with freely available technology.
Reach out
Tell us about your own use cases or let us know if you tried our example. Join us on Slack!