Once you create the knowledge graph, you will need to store it so it can be accessed and queried at request time. At this point, you have two options: use a dedicated graph database to store the whole graph, or add the knowledge graph to your existing database.
While it may seem intuitive to use a graph database to store your knowledge graph, it isn’t actually necessary. Running a full graph database is worthwhile if you plan to run full graph queries using the likes of Gremlin or Cypher. However, graph databases are designed for more complex queries that search for paths with specific sequences of properties, i.e., graph analytics. That overhead is simply overkill for retrieving sub-knowledge-graph results in these circumstances, and it opens the door to a host of other problems, such as queries whose performance goes off the rails as the graph grows.
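To make the contrast concrete, the kind of path-analytics query that justifies a dedicated graph database might look like the following sketch, written here with the official neo4j Python driver. The connection details, node label, relationship type, and property names are assumptions for illustration, not anything prescribed by a particular setup.

```python
from neo4j import GraphDatabase

# Hypothetical connection details; adjust for your own deployment.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# A variable-length path query: find every entity reachable within five hops
# whose relationships all satisfy a property condition. This open-ended path
# search is what a graph database is optimized for, and exactly the kind of
# query whose cost can balloon as the graph grows.
CYPHER = """
MATCH path = (a:Entity {id: $start_id})-[:RELATED_TO*1..5]->(b:Entity)
WHERE all(r IN relationships(path) WHERE r.weight > $min_weight)
RETURN b.id AS id, length(path) AS hops
"""

with driver.session() as session:
    for record in session.run(CYPHER, start_id="node-42", min_weight=0.5):
        print(record["id"], record["hops"])

driver.close()
```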
Retrieving the sub-knowledge graph around a few nodes is a simple graph traversal, so you may not need the full capabilities of a dedicated graph database. Traversals usually only need to go two or three hops deep; beyond that, any additional information is unlikely to be relevant to the specific vector search query anyway. This means that your requests can normally be expressed as a few rounds of simple queries (one per hop) or a single SQL join, as in the sketch below. In effect, the simpler you keep your queries, the better the quality of the results you can provide to your LLM.
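Concretely, a depth-limited traversal over an ordinary relational store can be a short loop of simple lookups. The sketch below assumes a plain edges(source, target) table in SQLite, and seed node IDs that would, in practice, come out of your vector search; the table schema and backend are illustrative assumptions.

```python
import sqlite3

def fetch_neighbors(conn, node_ids):
    """One round of the traversal: fetch the targets of all edges leaving node_ids."""
    placeholders = ",".join("?" * len(node_ids))
    rows = conn.execute(
        f"SELECT source, target FROM edges WHERE source IN ({placeholders})",
        list(node_ids),
    ).fetchall()
    return {target for _, target in rows}

def subgraph_around(conn, seed_ids, depth=2):
    """Collect every node within `depth` hops of the seed nodes."""
    seen = set(seed_ids)
    frontier = set(seed_ids)
    for _ in range(depth):
        # One simple query per hop; no path analytics required.
        frontier = fetch_neighbors(conn, frontier) - seen
        if not frontier:
            break
        seen |= frontier
    return seen

if __name__ == "__main__":
    # Tiny in-memory demo of a two-hop traversal.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE edges (source TEXT, target TEXT)")
    conn.executemany("INSERT INTO edges VALUES (?, ?)",
                     [("a", "b"), ("b", "c"), ("c", "d")])
    print(subgraph_around(conn, {"a"}, depth=2))  # {'a', 'b', 'c'}
```

The same shape works with a single self-join in SQL if you prefer one round trip; the per-hop loop is shown here because it stays readable at any depth and is easy to cap.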