It seems to me that chunking (or some higher-order version of it, like chunking into knowledge graphs) is the highest-leverage thing someone can work on right now to improve the intelligence of AI systems like code completion, PDF understanding, etc. I’m surprised more people aren’t working on this.
lmeyerov|10 months ago
We still want chunking in practice to avoid LLM confusion and undifferentiated embeddings, and to handle large datasets at lower cost and higher volume. Large context windows mean we can now tolerate multi-paragraph or multi-page chunks, so it's more like chunking by coherent section.
In theory we could do an entire chapter or book, but those other concerns come in, so I only see more niche tools or talk-to-your-PDF products doing that.
At the same time, embedding is often a significant cost in the above scenarios, so I'm curious about the semantic chunking overheads.
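The "chunk by coherent section" idea above can be sketched in a few lines (a minimal illustration of mine, assuming markdown-style headings mark section boundaries; `chunk_by_section` is a hypothetical name, not a real library function):

```python
import re

def chunk_by_section(text: str) -> list[str]:
    """Split markdown text into one chunk per heading-delimited section."""
    # Break before every line that starts with 1-6 '#' characters,
    # keeping the heading with the body that follows it.
    parts = re.split(r"(?m)^(?=#{1,6} )", text)
    return [p.strip() for p in parts if p.strip()]

doc = """# Intro
Some overview text.
## Details
The meat of the section.
## Costs
Embedding every chunk has a price."""

chunks = chunk_by_section(doc)
# Each chunk is a heading plus its body, so embeddings stay
# differentiated and each retrieved unit is self-coherent.
```

Since each section is embedded once regardless of its length, this also keeps the embedding-cost concern roughly proportional to the number of sections rather than an arbitrary chunk count.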
michaelmarkell|10 months ago
In the naive chunking approach, we would grab random sections of line items from these tables because they happen to contain text similar to the search query, but there’s no guarantee the data pulled into context is complete.
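The failure mode described here can be shown with a toy example (my sketch, not from the comment): a fixed-size splitter cuts a table of line items at arbitrary character offsets, so no single chunk holds the whole table, and a retriever that matches the query against one chunk sees only a slice of the rows:

```python
def naive_chunks(text: str, size: int) -> list[str]:
    """Naive fixed-size chunking: cut every `size` characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# A single logical table of line items (hypothetical data).
table = "item,qty\n" + "\n".join(f"widget-{i},10" for i in range(20))

chunks = naive_chunks(table, 80)
# The table's rows are now scattered across several chunks; any one
# chunk retrieved into context is an incomplete set of line items.
```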
J_Shelby_J|10 months ago
It splits an input text into equal-sized chunks, using DFS and parallelization (rayon) to do so relatively quickly.
However, my goal is to use an LLM to split text by topic. I’m thinking I will implement it as a SaaS API on top of the OSS library. Do you think that’s a viable business? You send in a library of text, and receive a library of single-topic context chunks as output.
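A rough Python analogue of the equal-sized, depth-first splitting described above (a sketch under my own assumptions, not the poster's actual Rust implementation): recursively bisect the text at the whitespace nearest the midpoint until every piece fits the size budget.

```python
def split_equal(text: str, max_len: int) -> list[str]:
    """Recursively bisect text into chunks of at most max_len chars."""
    if len(text) <= max_len:
        return [text]
    mid = len(text) // 2
    # Prefer to cut at a whitespace boundary near the midpoint.
    cut = text.rfind(" ", 0, mid)
    if cut <= 0:
        cut = mid  # no whitespace found: cut mid-word as a fallback
    # Depth-first: fully split the left half, then the right half.
    return (split_equal(text[:cut], max_len)
            + split_equal(text[cut:].lstrip(), max_len))

words = " ".join(["word"] * 50)
chunks = split_equal(words, 40)
```

In the Rust version, the two recursive calls are what rayon can run in parallel, since the halves are independent.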