
Show HN: PageIndex – Vectorless RAG

192 points | page_index | 6 months ago | github.com

Not all improvements come from adding complexity — sometimes it's about removing it.

PageIndex takes a different approach to RAG. Instead of relying on vector databases or artificial chunking, it builds a hierarchical tree structure from documents and uses reasoning-based tree search to locate the most relevant sections. This mirrors how humans approach reading: navigating through sections and context rather than matching embeddings.

As a result, the retrieval feels transparent, structured, and explainable. It moves RAG away from approximate "semantic vibes" and toward explicit reasoning about where information lives. That clarity can help teams trust outputs and debug workflows more effectively.

The broader implication is that retrieval doesn't need to scale endlessly in vectors to be powerful. By leaning on document structure and reasoning, it reminds us that efficiency and human-like logic can be just as transformative as raw horsepower.
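
To make the idea concrete, here is a rough sketch of what reasoning-based tree search over a document outline can look like. This is not PageIndex's actual code: the node layout, the prompts, and the model name are illustrative assumptions.

    # Illustrative sketch of reasoning-based tree search, not PageIndex's implementation.
    from dataclasses import dataclass, field
    from openai import OpenAI

    client = OpenAI()

    @dataclass
    class Node:
        title: str
        summary: str
        text: str = ""                          # full text for leaf sections
        children: list["Node"] = field(default_factory=list)

    def pick_child(question: str, node: Node) -> Node | None:
        """Ask the model which child section most likely contains the answer."""
        menu = "\n".join(f"{i}: {c.title} - {c.summary}" for i, c in enumerate(node.children))
        reply = client.chat.completions.create(
            model="gpt-4o-mini",                # placeholder; any reasoning-capable model
            messages=[{"role": "user", "content":
                       f"Question: {question}\nSections:\n{menu}\n"
                       "Reply with the number of the most relevant section, or 'none'."}],
        ).choices[0].message.content.strip()
        if reply.isdigit() and int(reply) < len(node.children):
            return node.children[int(reply)]
        return None

    def retrieve(question: str, root: Node) -> str:
        node = root
        while node.children:                    # descend until a leaf section is reached
            child = pick_child(question, node)
            if child is None:
                break
            node = child
        return node.text or node.summary

Only the current node's children ever need to fit in context, which is what keeps each traversal step small.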

128 comments

[+] ineedasername|6 months ago|reply
>"Retrieval based on reasoning — say goodbye to approximate semantic search ("vibe retrieval"

How is this not precisely "vibe retrieval" and much more approximate, where approximate in this case is uncertainty over the precise reasoning?

Similarity with conversion to high-dimensional vectors and then something like kNN seems significantly less approximate, less "vibe" based, than this.

This also appears to be completely predicated on pre-enrichment of the documents by adding structure through API calls to, in the example, OpenAI.

It doesn't at all seem accurate to:

1: Toss out mathematical similarity calculations

2: Add structure with LLMs

3: Use LLMs to traverse the structure

4: Label this as less vibe-ish

Also for any sufficiently large set of documents, or granularity on smaller sets of documents, scaling will become problematic as the doc structure approaches the context limit of the LLM doing the retrieval.

[+] leetharris|6 months ago|reply
I work in this field, so I can answer.

Embeddings are great at basic conceptual similarity, but in quality maximalist fields and use cases they fall apart very quickly.

For example:

"I want you to find inconsistencies across N documents." There is no concept of an inconsistency in an embedding. However, a textual summary or context stuffing entire documents can help with this.

"What was John's opinion on the European economy in 2025?" It will find a similarity to things involving the European economy, including lots of docs from 2024, 2023, etc. And because of chunking strategies with embeddings and embeddings being heavily compressed representations of data, you will absolutely get chunks from various documents that are not limited to 2025.

"Where are Sarah or John directly quoted in this folder full of legal documents?" Sarah and John might be referenced across many documents, but finding where they are directly quoted is nearly impossible even in a high dimensional vector.

Embeddings are awesome, and great for some things like product catalog lookups and other fun stuff, but for many industries the mathematical cosine similarity approach is just not effective.

[+] jimmytucson|6 months ago|reply
It is just as "vibe-ish" as vector search and notably does require chunking (document chunks are fed to the indexer to build the table of contents). That said, I don't find vector search any less "vibey". While "mathematical similarity" is a structured operation, the "conversion to high-dimensional vectors" part is predicated on the encoder, which can be trained towards any objective.

    > scaling will become problematic as the doc structure approaches the context limit of the LLM doing the retrieval
IIUC, retrieval is based on traversing a tree structure, so only the root nodes have to fit in the context window. I find that kinda cool about this approach.

But yes, still "vibe retrieval".

[+] SV_BubbleTime|6 months ago|reply
> This also appears to be completely predicated on pre-enrichment of the documents by adding structure through API calls to, in the example, openAI.

That was my immediate take. [Look at the summary and answer based on where you expect the data to be found] maybe works well for reliably structured data.

[+] mosselman|6 months ago|reply
So if I understand this correctly it goes over every possible document with an LLM each time someone performs a search?

I might have misunderstood of course.

If so, then the use cases for this would be fairly limited since you'd have to deal with lots of latency and costs. In some cases (legal documents, medical records, etc) it might be worth it though.

An interesting alternative I've been meaning to try out is inverting this flow. Instead of using an LLM at search time to find pieces relevant to the query, you flip it around: at ingest time you let an LLM note all of the possible questions a given text can answer and store those in an index. You could then use traditional full-text search or other algorithms (BM25?) to find relevant documents and pieces of text. You could even go for a hybrid approach with vectors on top of or next to this. Maybe vectors first and then more ranking with something more traditional.
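
As a minimal sketch of that inverted flow, assuming the OpenAI API for question generation and the rank_bm25 package for retrieval (the prompt and model name are placeholders):

    # Sketch: at ingest time an LLM lists the questions each chunk can answer;
    # retrieval is then plain BM25 over those generated questions.
    from openai import OpenAI
    from rank_bm25 import BM25Okapi

    client = OpenAI()

    def questions_for(chunk: str) -> list[str]:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content":
                       "List, one per line, the questions this text can answer:\n\n" + chunk}],
        ).choices[0].message.content
        return [q.strip("- ").strip() for q in reply.splitlines() if q.strip()]

    def build_index(chunks: list[str]):
        entries = []                                     # (question, index of source chunk)
        for i, chunk in enumerate(chunks):
            entries += [(q, i) for q in questions_for(chunk)]
        bm25 = BM25Okapi([q.lower().split() for q, _ in entries])
        return bm25, entries

    def search(query: str, bm25, entries, chunks, k: int = 3) -> list[str]:
        scores = bm25.get_scores(query.lower().split())
        top = sorted(range(len(entries)), key=lambda i: scores[i], reverse=True)[:k]
        return [chunks[entries[i][1]] for i in top]      # return the underlying chunks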

What appeals to me with that setup is low latency and good debug-ability of the results.

But as I said, maybe I've misunderstood the linked approach.

[+] Qwuke|6 months ago|reply
>An interesting alternative I've been meaning to try out is inverting this flow. Instead of using an LLM at time of searching to find relevant pieces to the query, you flip it around: at time of ingesting you let an LLM note all of the possible questions that you can answer with a given text and store those in an index.

You may already know of this one, but consider giving Google LangExtract a look. A lot of companies are doing what you described in production, too!

[+] agentcoops|6 months ago|reply
I’ve been working on RAG systems a lot this year and I think one thing people miss is that often for internal RAG efficiency/latency is not the main concern. You want predictable, linear pricing of course, but sometimes you want to simply be able to get a predictably better response by throwing a bit more money/compute time at it.

It’s really hard to get to such a place with standard vector-based systems, even GraphRAG. Because it relies on pre-computed summaries of topic clusters, if one of those summaries is inaccurate, or none of the summaries deal with your exact question, that will never change during query processing. Moreover, GraphRAG preprocessing is insanely expensive and precisely does not scale linearly with your dataset.

TLDR all the trade-offs in RAG system design are still being explored, but in practice I’ve found the main desired property to be “predictably better answer with predictably scaling cost” and I can see how similar concerns got OP to this design.

[+] sdesol|6 months ago|reply
> An interesting alternative I've been meaning to try out is inverting this flow.

This is what I am doing with my AI Search Assistant feature, which I discuss in more detail via the link below:

https://github.com/gitsense/chat/blob/main/packages/chat/wid...

By default, I provide what I call a "Tiny Overview Analyzer". You can read the prompt for the Analyzer with the link below:

https://github.com/gitsense/chat/blob/main/packages/chat/wid...

In a nutshell, it generates a very short summary of every document along with keywords. The basic idea is to use BM25 ranking to identify the most relevant documents for the AI to review. For example, my use case is to understand how Aider, Claude Code, etc., store their conversations so that I can make them readable in my chat app. To answer this, I would ask 'How does Aider store conversations?' and the LLM would construct a deterministic keyword search using terms that would most likely identify how conversations are stored.

Once I have the list of files, the LLM is asked again to review the summaries of all matches and suggest which documents should be loaded in full for further review. I've found this approach to be inconsistent, however. What I've found to work much better is just loading the "Tiny Overview" summaries into context and chatting with the LLM. For example, I would ask the same question: "Which files do you think can tell me how Aider stores conversations? Identify up to 20 files and create a context bundle for them so I can load them into context." For a thousand files, you can easily fit three-sentence summaries for each of them without overwhelming the LLM. Once I have my answer, I just need a few clicks to load the files into context, and then the LLM will have full access to the file content and can better answer my question.
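
As a rough sketch, that "summaries in context" step can be as simple as the following (the paths, prompt wording, and model name are hypothetical):

    # Sketch: put every file's three-sentence summary in context and ask the
    # model which files to load in full. Prompt and model name are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def pick_files(question: str, summaries: dict[str, str], limit: int = 20) -> list[str]:
        listing = "\n".join(f"{path}: {text}" for path, text in summaries.items())
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content":
                       f"{question}\n\nFile summaries:\n{listing}\n\n"
                       f"List up to {limit} file paths, one per line, that I should load in full."}],
        ).choices[0].message.content
        return [line.strip() for line in reply.splitlines() if line.strip() in summaries]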

[+] rafaelmn|6 months ago|reply
I didn't look at the implementation, but it sounds similar to something I built two years ago: recursively summarize the documentation based on structure (domain/page/section), then ask the model to walk the hierarchy based on the summaries.

My motivation back then: I had an 8k context length to work with, so I had to be very conservative about what I included. I still used vectors to narrow down the entry points and then used an LLM to drill down or pick the most relevant ones. The search threads were separate; each would summarize its response based on the tree path it took, and then the main thread would combine them.

[+] jdthedisciple|6 months ago|reply
> let an LLM note all of the possible questions that you can answer

What does this even mean? At what point do you know you have all of them?

In my observation, humans are quite ingenious at coming up with new, unique questions, whereas LLMs have a hard time replicating that efficiently.

[+] CuriouslyC|6 months ago|reply
So, this has already been done plenty: Serena MCP and Codanna MCP both do this with AST source graphs, and Codanna even gives hints in the MCP response to guide the agent to walk up/down the graph. There might be some small efficiency gain in having a separate agent walk the graph in terms of context savings, but you also lose solution fidelity, so I'm not sure it's a win. Also, it's not a replacement for RAG; it's just another piece in the pipeline that you merge over (rerank+cut or LLM distillate).
[+] tomomomo|6 months ago|reply
Yeah, I agree it’s not something new, since humans also do this kind of retrieval. It’s just a way to generate a table of contents for an LLM. I’m wondering, when LLMs become stronger, will we still need vector-based retrieval? Or will we need a retrieval method that’s more like how humans do it?
[+] mikeve|6 months ago|reply
Not sure if I fully understand it, but this seems highly inefficient?

Instead of using embeddings, which are easy to make and cheap to compare, you use summarized sections of documents and process them with an LLM? LLMs are slower and more expensive to run.

[+] falcor84|6 months ago|reply
If this is used as an important tool call for an AI agent that performs many other calls, then it's likely that the added cost and latency would be negligible compared to the benefit of significantly improved retrieval. As an analogy, for a small task you're often ok with just going over the first few search results, but to prepare for a large project you might want to spend an afternoon researching.
[+] CuriouslyC|6 months ago|reply
The idea this person is trying for is an LLM that explores the codebase using the source graph the way a human might: by ctrl+clicking in IDEA/VSCode to go to a definition, searching for usages of a function, etc. It actually does work, and other systems use it as well, though they have the main agent performing the codebase walk rather than delegating to a "codebase walker" agent.
[+] mingtianzhang|6 months ago|reply
I think it only needs to generate the tree once before retrieval, and it doesn’t require any external model at query time. The indexing may take some time upfront, but retrieval is then very fast and cost-free.
[+] dcre|6 months ago|reply
My approach in "LLM-only RAG for small corpora" [0] was to mechanically make an outline version of all the documents _without_ an LLM, feed that to an LLM with the prompt to tell which docs are likely relevant, and then feed the entirety of those relevant docs to a second LLM call to answer the prompt. It only works with markdown and asciidoc files, but it's surprisingly solid for, for example, searching a local copy of the jj or helix docs. And if the corpus is small enough and your model is on the cheap side (like Gemini 2.5 Flash), you can of course skip the retrieval step and just send the entire thing every time.

[0]: https://crespo.business/posts/llm-only-rag/
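
Roughly, the two-step flow looks like this (headings-only outlines are built without an LLM; the prompts and model name below are illustrative, not the exact code from the post):

    # Sketch of the outline-then-read flow: cheap mechanical outlines, one LLM call
    # to pick relevant docs, one LLM call to answer from the full text of those docs.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        return client.chat.completions.create(
            model="gpt-4o-mini",                # any cheap model works here
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content

    def outline(path: Path) -> str:
        headings = [line for line in path.read_text().splitlines() if line.startswith("#")]
        return path.name + "\n" + "\n".join(headings)

    def answer(question: str, docs: list[Path]) -> str:
        outlines = "\n\n".join(outline(p) for p in docs)
        picked = ask(f"Question: {question}\n\nOutlines:\n{outlines}\n\n"
                     "List the filenames likely to contain the answer, one per line.")
        chosen = [p for p in docs if p.name in picked] or docs
        corpus = "\n\n".join(p.read_text() for p in chosen)
        return ask(f"{corpus}\n\nAnswer this question from the text above: {question}")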

[+] thatjoeoverthr|6 months ago|reply
There are good reasons to do this. Embedding similarity is _not_ a reliable method of determining relevance.

I did some measurements and found you can't even really tell if two documents are "similar" or not. Here: https://joecooper.me/blog/redundancy/

One common way is to mix approaches, e.g. take a large top-K from ANN on embeddings as a preliminary shortlist, then run a tuned LLM or cross-encoder to evaluate relevance.
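
For example, with off-the-shelf sentence-transformers models (the model names below are common defaults picked for illustration, not a recommendation):

    # Sketch: embedding shortlist, then cross-encoder rerank of the shortlist.
    import numpy as np
    from sentence_transformers import SentenceTransformer, CrossEncoder

    docs = ["..."]                                            # your corpus
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    doc_vecs = encoder.encode(docs, normalize_embeddings=True)

    def retrieve(query: str, shortlist: int = 50, final: int = 5) -> list[str]:
        q = encoder.encode(query, normalize_embeddings=True)
        top = np.argsort(doc_vecs @ q)[::-1][:shortlist]      # brute-force cosine stands in for ANN
        scores = reranker.predict([(query, docs[i]) for i in top])
        best = top[np.argsort(scores)[::-1][:final]]          # keep the highest cross-encoder scores
        return [docs[i] for i in best]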

I'll link here these guys' paper which you might find fun: https://arxiv.org/pdf/2310.08319

At the end of the day you just want a way to shortlist and focus information that's cheaper, computationally, and more reliable, than dumping your entire corpus into a very large context window.

So what we're doing is fitting the technique to the situation. Price of RAM; GPU price; size of dataset; etc. The "ideal" setup will evolve as the cost structure and model quality evolves, and will always depend on your activity.

But for sure, ANN-on-embedding as your RAG pipeline is a very blunt instrument and if you can afford to do better you can usually think of a way.

[+] joshua_s_penman|6 months ago|reply
The thing is — for very long documents, it's actually pretty hard for humans to find things, even with a hierarchical structure. This is why we made indexes — the original indexes! — on paper. What you're saying makes pretty hard assumptions about document content, and of course doesn't start to touch multiple documents.

My feeling is that what you're getting at is actually that it's hard to get semantic chunks; when embedding them, it's hard to have those chunks retain context/meaning; and then, when retrieving, the cosine similarity of query/document is too vibes-y and not strictly logical.

These are all extremely real problems with the current paradigm of vector search. However, my belief is that one can fix each of these problems vs abandoning the fundamental technology. I think that we've only seen the first generation of vector search technology and there is a lot more to be built.

At Vectorsmith, we have some novel takes on both the computation and storage architecture for vector search. We have been working on this for the last 6 months and have seen some very promising results.

Fundamentally my belief is that the system is smarter when it mostly stays latent. All the steps of discretization that are implied in a search system like the above lose information in a way that likely hampers retrieval.

[+] zan2434|6 months ago|reply
interesting, so you think the issue with the above approach is the graph structure being too rigid / lossy (in terms of losing semantics)? And embeddings are also too lossy (in terms of losing context and structure)? But you guys are working on something less lossy for both semantics and context?
[+] mvieira38|6 months ago|reply
> It moves RAG away from approximate "semantic vibes" and toward explicit reasoning about where information lives. That clarity can help teams trust outputs and debug workflows more effectively.

Wasn't this a feature of RAGs, though? That they could match semantics instead of structure, while we mere balls of flesh need to rely on indexes. I'd be interested in benchmarks of this versus traditional vector-based RAG; is something to that effect planned?

[+] brap|6 months ago|reply
Very cool. These days I’m building RAG over a large website, and when I look at the results being fed into the LLM, most of them are so silly it’s surprising the LLM even manages to extract something meaningful. Always makes me wonder if it’s just using prior knowledge even though it’s instructed not to do so (which is hacky).

I like your approach because it seems like a very natural search process, like a human would navigate a website to find information. I imagine the tradeoff is performance of both indexing and search, but for some use cases (like mine) it’s a good sacrifice to make.

I wonder if it’s useful to merge the two approaches. Like you could vectorize the nodes in the tree to give you a heuristic that guides the search. Could be useful in cases where information is hidden deep in a subtree, in a way that the document’s structure doesn’t give it away.
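
For instance, a best-first search where node-embedding similarity to the query supplies the priority. The node layout (title/summary/children) and the embedding model here are assumptions, not PageIndex's actual schema:

    # Sketch: best-first search over the section tree, ranked by embedding similarity.
    import heapq
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    def guided_search(query: str, root, k: int = 3) -> list:
        """Nodes are assumed to have .title, .summary and .children attributes."""
        q = encoder.encode(query, normalize_embeddings=True)

        def score(node) -> float:
            v = encoder.encode(f"{node.title}. {node.summary}", normalize_embeddings=True)
            return float(v @ q)

        frontier, leaves = [(-score(root), id(root), root)], []
        while frontier and len(leaves) < k:
            _, _, node = heapq.heappop(frontier)             # most promising node first
            if not node.children:
                leaves.append(node)                          # leaf section: candidate for the LLM
                continue
            for child in node.children:
                heapq.heappush(frontier, (-score(child), id(child), child))
        return leaves

The priority queue lets a promising deep branch jump ahead of shallow ones, which is exactly the "hidden deep in a subtree" case.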

[+] mingtianzhang|6 months ago|reply
Strongly agree! It is basically the Monte Carlo tree search method used in AlphaGo! This is also mentioned in one of their tutorials: PageIndex/blob/main/tutorials/doc-search/semantics.md. I believe it will make the method more scalable for large documents.
[+] malshe|6 months ago|reply
The folks who are using RAG: what's the SOTA for extracting text from PDF documents? I have been following discussions on HN and I have seen a few promising solutions that involve converting the PDF to PNG and then doing extraction. However, for my application this looks a bit risky because my PDFs have tons of tables and I can't afford to get back incorrect or made-up numbers.

The original documents are in HTML format, and although I don't have access to them, I can obtain them if I want. Is it better to just use these HTML documents instead? Previously I tried converting HTML to markdown and then using those for RAG. I wasn't too happy with the result, although I fear I might be doing something wrong.

[+] huqedato|6 months ago|reply
I have a RAG built on a 10,000+ doc knowledge base. On a vector store, of course (Qdrant, hybrid search). It works smoothly and is quite reliable.

I wonder how this "vectorless" engine would deal with that. Simply put, I can't see this tech scaling.

[+] lewisjoe|6 months ago|reply
This works well when you have a single document, or a small set of documents, and want your questions answered.

When you have a question and you don't know which of the million documents in your dataspace contains the answer, I'm not sure how this approach will perform. In that case we are looking at either feeding an enormously large tree as context to the LLM or looping through potentially thousands of iterations between the tree and the LLM.

That said, this really is a good idea for a small search space (like a single document).

[+] gillesjacobs|6 months ago|reply
A suspicious lack of any performance metrics on the many standard RAG/QA benchmarks out there, except for their highly fine-tuned and dataset-specific MAFIN2.5 system. I would love to see this approach vs. a similarly well-tuned structured hybrid retriever (vector similarity + text matching), which is the common way of building domain-specific RAG. The FinanceBench GPT-4o+Search system never mentions what the retrieval approach is [1,2], so I will have to assume it is the dumbest retriever possible to oversell the improvement.

PageIndex does not state to what degree the semantic structuring is rule-based (document structure) or also inferred by an ML model, in any case structuring chunks using semantic document structure is nothing new and pretty common, as is adding generated titles and summaries to the chunk nodes. But I find it dubious that prompt-based retrieval on structured chunk metadata works robustly, and if it does perform well it is because of the extra work in prompt-engineering done on chunk metadata generation and retrieval. This introduces two LLM-based components that can lead to highly variable output versus a traditional vector chunker and retriever. There are many more knobs to tune in a text prompt and an LLM-based chunker than in a sentence/paragraph chunker and a vector+text similarity hybrid retriever.

You will have to test retrieval and generation performance for your application regardless, but with so many LLM-based components this will lead to increased iteration time and cost vs. embeddings. An advantage of PageIndex is that you can probably make it really domain-specific. Claims of improved retrieval time are dubious: vector databases (even with hybrid search) are highly efficient, definitely more efficient than prompting an LLM to select relevant nodes.

1. https://pageindex.ai/blog/Mafin2.5
2. https://github.com/VectifyAI/Mafin2.5-FinanceBench

[+] gogeta_99999|6 months ago|reply
>Instead of relying on vector databases or artificial chunking, it builds a hierarchical tree structure from documents and uses reasoning-based tree search to locate the most relevant sections.

So are we creating a tree for each document on the fly? Even if it's a batch process, don't you think we are pointing back to something which is a graph (an approximation vs. latency sort of framework)?

Looks like you are talking more along the lines of an LLM-driven outcome where the "semantic" part is replaced with LLM intelligence.

I tried similar approaches a few months back, but they often resulted in poor scalability, predictability, and quality.

[+] visarga|6 months ago|reply
I did something like this myself. Take a large PDF and summarize each page. Make sure to include the titles of the previous 3 pages; it helps with consistency and with detecting transitions from one part to another. Then you take all the page summaries as a list and do another call to generate the table of contents. When you want to use it, you add the TOC to the prompt and use a tool to retrieve sections on demand. This works better than embeddings, which are blind to relations and larger context.

It was for a complex scenario of QA on long documents, like 200 page earning reports.
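
A condensed sketch of that pipeline (the prompts and model name are placeholders, not the exact code I used):

    # Sketch: summarize each page with the previous 3 page titles as context,
    # then turn the page summaries into a table of contents for the prompt.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        return client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content

    def summarize_pages(pages: list[str]) -> list[dict]:
        out = []
        for i, page in enumerate(pages):
            prev = ", ".join(p["title"] for p in out[-3:])   # titles of the previous 3 pages
            reply = ask(f"Previous page titles: {prev}\n\nPage text:\n{page}\n\n"
                        "Give a short title on the first line, then a 2-sentence summary.")
            title, _, summary = reply.partition("\n")
            out.append({"page": i + 1, "title": title.strip(), "summary": summary.strip()})
        return out

    def build_toc(summaries: list[dict]) -> str:
        listing = "\n".join(f"p{s['page']}: {s['title']} - {s['summary']}" for s in summaries)
        return ask("Turn these page summaries into a table of contents with page ranges:\n" + listing)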

[+] dmezzetti|6 months ago|reply
Context and prompt engineering is the most important part of AI, hands down.

There are plenty of lightweight retrieval options that don't require a separate vector database (I'm the author of txtai [https://github.com/neuml/txtai], which is one of them).

It can be as simple as this in Python: you pass an index operation a data generator and save the index to a local folder. Then use that for RAG.
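
Roughly, following txtai's documented pattern (exact imports and options can differ by version):

    # Rough sketch based on txtai's documented usage; details may vary by version.
    from txtai import Embeddings

    def stream():
        # yield (id, text, tags) tuples from wherever your documents live
        for i, line in enumerate(open("docs.txt")):
            yield (i, line.strip(), None)

    embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)
    embeddings.index(stream())
    embeddings.save("index")                  # plain local folder, no separate vector DB

    # later: load the index and pull passages into an LLM prompt for RAG
    embeddings.load("index")
    context = embeddings.search("how do I configure retrieval?", 3)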

[+] neya|6 months ago|reply
This is good for applications where a background-queue-based RAG is acceptable. You upload a file, set the expectation with the user that you're processing it and need a few hours, and then after X hours you deliver the results. Great for manuals, documentation, and larger content.

But for on-demand, near-instant RAG (like, say, in a chat application), this won't work. Speed vs. accuracy vs. cost. Cost will be a really big one.

[+] actionfromafar|6 months ago|reply
If you have a lot of time, cost on a local machine may be low.