liukidar
|
1 month ago
|
on: Show HN: Smooth CLI – Token-efficient browser for AI agents
Ahah, indeed that's true... That's why we've just released Smooth CLI (
https://docs.smooth.sh/cli/overview) and the SKILL.md (smooth-sdk/skills/smooth-browser/SKILL.md) that goes with it. That should contain everything your agent needs to know to use Smooth. We will definitely add an LLM-friendly reference to the landing page and the docs introduction.
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
LLMs are only used to construct the graph; to navigate it we use an algorithmic approach. As of now, what we do is very similar to HippoRAG (
https://github.com/OSU-NLP-Group/HippoRAG); their paper gives a good overview of how things work under the hood!
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
That would be awesome! We have a Discord you can join and we can talk there (the link is in the GitHub repo; message Antonio),
or you can message antonio [at] circlemind.com
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
Thanks for sharing! These are all very helpful insights! We'll keep this in mind :)
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
We are building connectors for that, so it will be supported soon :) At the moment we use python-igraph (which does everything locally), as we wanted to offer something as ready to use as possible.
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
This is super interesting! Thanks for sharing. Here we are talking about graphs with millions of nodes/edges, so efficiency is not that big of a deal, since everything is going to be parsed by an LLM to craft an answer anyway, and that will always be the bottleneck. Indeed PageRank is just the first step, and we would be happy to test more accurate alternatives. Importantly, we are using personalized PageRank here, meaning we give specific initial weights to a (potentially quite large) set of nodes. Would TC support that (as well as giving weights to edges, since we are also looking into that)?
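To make the "personalized" part concrete, here is a minimal pure-Python sketch of personalized PageRank by power iteration, supporting both non-uniform restart weights over seed nodes and edge weights. This is a toy illustration of the idea, not FastGraphRAG's implementation (which runs on python-igraph):

```python
# Personalized PageRank by power iteration (toy sketch).
# edges: {(u, v): weight}; personalization: {node: restart_weight}.

def personalized_pagerank(edges, personalization, damping=0.85, iters=100, tol=1e-10):
    nodes = {u for e in edges for u in e} | set(personalization)
    # Normalize the restart distribution over the seed nodes.
    total = sum(personalization.values())
    restart = {n: personalization.get(n, 0.0) / total for n in nodes}
    # Weighted out-neighbor lists for the random walk.
    out = {n: [] for n in nodes}
    for (u, v), w in edges.items():
        out[u].append((v, w))
    rank = dict(restart)
    for _ in range(iters):
        new = {n: (1 - damping) * restart[n] for n in nodes}
        for u, nbrs in out.items():
            if nbrs:
                # Split u's mass among its neighbors, proportional to edge weight.
                denom = sum(w for _, w in nbrs)
                for v, w in nbrs:
                    new[v] += damping * rank[u] * w / denom
            else:
                # Dangling node: return its mass to the restart distribution.
                for n in nodes:
                    new[n] += damping * rank[u] * restart[n]
        if sum(abs(new[n] - rank[n]) for n in nodes) < tol:
            rank = new
            break
        rank = new
    return rank
```

With `personalization={"a": 1.0}`, the walk always restarts at `a`, so scores concentrate around the seed set rather than on globally central nodes — which is what lets the query's entities steer retrieval.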
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
We have tried everything from short novels to full documentation sets of a few million tokens, and both seem to create interesting graphs. It would be great to hear some feedback as more people start using it :)
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
It is to mark the package as private (in the sense that for normal usage you shouldn't need it). We are still writing the documentation on how to customize every little bit of the graph construction and querying pipeline; once that is ready, we will expose the right tools (and files) for all of that :) For now just go with `from fast_graphrag import GraphRAG` and you should be good to go :)
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
The graph is currently stored using python-igraph. The codebase is designed so that it is easy to integrate any graph DB by writing a light wrapper around it (we will provide support for databases like Neo4j in the near future). We haven't tried Triplex, since we found that gpt-4o-mini is fast and precise enough for now (and we use it not only to extract entities and relationships, but also to generate descriptions and resolve conflicts), but with fine-tuning results should certainly improve.
The graph is queried by finding an initial set of nodes that are relevant to a given query and then running personalized PageRank from those nodes to find other relevant passages. Currently, we select the initial nodes with semantic search both on the whole query and on entities extracted from it, but we are planning other exciting additions to this method :)
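As a toy illustration of that two-stage query (all names and embeddings here are made up for the example, not FastGraphRAG's actual code): seed entities are picked by cosine similarity to the query embedding, then weight is spread from them over the graph — a single damped propagation step stands in below for the full personalized PageRank:

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical 2-d entity embeddings (in practice these come from an embedding model).
entity_vecs = {
    "Scrooge": [0.9, 0.1],
    "Marley":  [0.8, 0.2],
    "London":  [0.1, 0.9],
}
graph = {  # adjacency: entity -> neighbors
    "Scrooge": ["Marley", "London"],
    "Marley":  ["Scrooge"],
    "London":  ["Scrooge"],
}
query_vec = [1.0, 0.0]  # pretend embedding of the user's question

# Stage 1: seed nodes = entities most similar to the query.
sims = {e: cosine(v, query_vec) for e, v in entity_vecs.items()}
seeds = {e: s for e, s in sims.items() if s > 0.5}

# Stage 2: one damped spreading step from the seeds (personalized-PageRank sketch).
damping = 0.85
score = {e: (1 - damping) * seeds.get(e, 0.0) for e in graph}
for u, w in seeds.items():
    for v in graph[u]:
        score[v] += damping * w / len(graph[u])

ranked = sorted(score, key=score.get, reverse=True)
```

The point of stage 2 is that entities which never matched the query directly still get scored through their graph neighbors, which is how the multi-hop connections surface.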
liukidar
|
1 year ago
|
on: Show HN: FastGraphRAG – Better RAG using good old PageRank
Exactly! Also, PageRank is used to navigate the graph and find "missing links" between the concepts selected from the query via semantic search with LLMs (so as to find the information needed to answer questions that require multi-hop or complex reasoning in one go).