
DeepSearcher: A local open-source Deep Research

229 points | stephen37 | 1 year ago | milvus.io

26 comments


gslepak|1 year ago

This doesn't seem to use local LLMs... so it's not really local. :-\

Is there a deep searcher that can also use local LLMs like those hosted by Ollama and LM Studio?

drdaeman|1 year ago

Looking at the code (https://github.com/zilliztech/deep-searcher/blob/master/deep...), I think it may work with Ollama without any additional tweaks if you run it with `OPENAI_BASE_URL=http://localhost:11434/v1`, or define `provide_settings.llm.base_url` in `config.yaml` (https://github.com/zilliztech/deep-searcher/blob/6c77b1e5597...) and set the model appropriately.

From a quick glance, this project doesn't seem to use tool/function calling, streaming, format enforcement, or any other "fancy" API features, so chances are it will just work, although I have some reservations about the quality, especially with smaller models.
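To make that concrete, a config along these lines might work (untested sketch: the key path `provide_settings.llm.base_url` comes from the comment above, and the model name is illustrative — use whatever you have pulled in Ollama):

```yaml
# Hypothetical config.yaml fragment pointing the LLM at a local Ollama server.
provide_settings:
  llm:
    base_url: http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint
    model: llama3.1                      # illustrative; any locally pulled model
```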

vineyardmike|1 year ago

I’m curious how this compares to the open-source version made by HuggingFace [1]. As far as I can tell, the HF version uses reasoning LLMs to search/traverse and parse the web and gather results, then evaluates the results before eventually synthesizing an answer.

This version appears to center on a vector store for documents generated from a web crawl (the author is a vector-store-as-a-service company).

[1] https://github.com/huggingface/smolagents/tree/main/examples...

stefanwebb|1 year ago

There are quite a few differences between HuggingFace's Open Deep-Research and Zilliz's DeepSearcher.

I think the biggest one is the goal: HF's is to replicate the performance of Deep Research on the GAIA benchmark, whereas ours is to teach agentic concepts and show how to build research agents with open-source tools.

Also, we go into the design in a lot more detail than HF's blog post. On the design side, HF uses code writing and execution as a tool, whereas we use prompt writing and calling as a tool. We do an explicit breakdown of the query into sub-queries, sub-sub-queries, etc., whereas HF uses a chain of reasoning to decide what to do next.

I think ours is a better approach for producing a detailed report on an open-ended question, whereas HF's is better for answering a specific, challenging question in short form.
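The recursive breakdown into sub-queries and sub-sub-queries described above can be sketched roughly like this (the function names and prompt are hypothetical, not DeepSearcher's actual API; a toy stand-in replaces the real LLM call so the sketch runs standalone):

```python
# Hypothetical sketch of recursive query decomposition, in the spirit of
# the approach described above -- not DeepSearcher's actual code.

def decompose(query: str, llm, depth: int = 0, max_depth: int = 2) -> list[str]:
    """Recursively split a query into sub-queries until max_depth is reached."""
    if depth >= max_depth:
        return [query]
    subs = llm(f"Break this research question into sub-questions: {query}")
    leaves = []
    for sub in subs:
        leaves.extend(decompose(sub, llm, depth + 1, max_depth))
    return leaves

# Toy stand-in for an LLM call, so the sketch is self-contained.
def fake_llm(prompt: str) -> list[str]:
    q = prompt.rsplit(": ", 1)[-1]
    return [f"{q} (background)", f"{q} (current state)"]

leaves = decompose("How do research agents work?", fake_llm, max_depth=2)
print(len(leaves))  # two levels of 2-way splits -> 4 leaf sub-queries
```

Each leaf sub-query would then be answered (e.g. against the vector store), with the answers synthesized back up into the final report.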

parhamn|1 year ago

I think the magic of Grok's implementation of this is that they already have most of the websites cached (guessing via their Twitter crawler), so it all feels very snappy. Bing/Brave don't seem to offer that in their search APIs. Does such a thing exist as a service?

tekacs|1 year ago

I’ve been wondering about this and searching for solutions too.

For now we’ve just managed to optimize how quickly we download pages, but haven’t found an API that actually caches them. Perhaps companies are concerned that they’ll be sued for it in the age of LLMs?

The Brave API provides ‘additional snippets’, meaning that you at least get multiple slices of the page, but it’s not quite a substitute.

binarymax|1 year ago

Web search APIs can't present the full document due to copyright. They can only present the snippet contextual to the query.

I wrote my own implementation using various web search APIs and a Puppeteer service to download individual documents as needed. It wasn't that hard, but I do get blocked by some sites (Reddit, for example).
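The "search API for discovery, fetch full documents yourself" approach described above can be sketched with just the standard library (no headless browser, so JS-rendered pages and bot-blocking sites like Reddit won't work — that's what the Puppeteer service is for; the class and function names here are illustrative):

```python
# Minimal sketch: fetch a page and strip it down to visible text,
# stdlib only. Real implementations would add retries, robots.txt
# handling, and a headless browser for JS-heavy sites.
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class TextExtractor(HTMLParser):
    """Collects text nodes, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

def fetch_text(url: str, timeout: float = 10.0) -> str:
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urlopen(req, timeout=timeout).read().decode("utf-8", "replace")
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

The extracted text would then be chunked and embedded into the vector store for retrieval.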

fuddle|1 year ago

Considering all the major AI companies have built essentially the same deep research product, it would make sense for them to focus on a shared open-source platform instead.

Daniel_Van_Zant|1 year ago

Have been searching for a deep research tool that I can hook up to both my personal notes (in Obsidian) and the web, and this looks like it has those capabilities. Now the only piece left is to figure out a way to export the deep research outputs back into my Obsidian somehow.
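Since an Obsidian vault is just a folder of markdown files, the export step could be as simple as writing the report into the vault with some frontmatter (a minimal sketch — the vault subfolder, tag, and function name are all illustrative):

```python
# Sketch: drop a research report into an Obsidian vault as a markdown note.
# Obsidian picks up any .md file placed in the vault folder.
from datetime import date
from pathlib import Path

def save_to_vault(vault: Path, title: str, body: str) -> Path:
    # Replace characters that are awkward in filenames.
    safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
    note = vault / "Deep Research" / f"{safe}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text(
        f"---\ntags: [deep-research]\ndate: {date.today()}\n---\n\n"
        f"# {title}\n\n{body}\n",
        encoding="utf-8",
    )
    return note
```

Wiki-links (`[[...]]`) to existing notes could be added to the body to tie the report into the rest of the vault.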

jianc1010|1 year ago

Sometimes I want to do a little coding to automate things with my personal productivity tools, so I find the programmatic interface that an open-source implementation like this provides very convenient.

mtrovo|1 year ago

I'm wondering about the practical implications of integrating web crawling. Could this, in theory, be used solely for reading papers from Sci-Hub and producing valid graduate-level research?

It could be useful for comparing reports built using DeepSeek R1 vs. GPT-4o and other large models. The code being open source might highlight the limitations of different LLMs much faster and help develop better reasoning loops in future prompts for specific needs. Really interesting stuff.

namlem|1 year ago

The real magic bullet would be searching lib-gen and sci-hub as well

redskyluan|1 year ago

Amazing!

Search is not the problem. What to search for is!

With a reasoning model, it is much easier to split up the task and focus on what to search for.

gnatnavi|1 year ago

+1. Asking the right questions is always the most difficult thing to do.

cma|1 year ago

Cloudflare is going to ruin self-hosted things like this and force centralization to a few players. I guess we'll need decentralized efforts to scrape the web and be able to run on that.