
Rga: Ripgrep, but also search in PDFs, E-Books, Office documents, zip, etc.

516 points | bukacdan | 1 year ago | github.com

57 comments


staplung|1 year ago

Anyone know how it handles ligatures? Depending on font and tooling the word "fish" may end up in various docs as the glyphs [fi, s, h] or [f, i, s, h].

According to a quick check against /usr/share/dict/words "fi" occurs in about 1.5% of words and "fl" occurs in about 1%. There are other ligatures that sometimes occur but those are the most common in English I believe.

I don't have any sense of how common ligature usage is anymore (I notice that the word "Office" in the title of this article is not rendered with a ligature by Chrome) but it might be insanity inducing to end up on the wrong side of a failed search where ligatures were not normalized.
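For what it's worth, Unicode compatibility normalization (NFKC) decomposes the ligature code points back into plain letter sequences, so a pipeline that normalizes extracted text before matching sidesteps this problem. A quick sketch from the shell (it shells out to python3 for the normalization step, which is an assumption about what's installed):

```shell
# "ﬁsh" below starts with the single code point U+FB01 (the fi ligature).
# NFKC normalization decomposes it into the two letters "f" and "i",
# so a search for "fish" matches the normalized text.
printf 'ﬁsh' | python3 -c 'import sys, unicodedata; print(unicodedata.normalize("NFKC", sys.stdin.read()))'
# prints: fish
```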

kranner|1 year ago

Seems to work well when it's searching the PDF text layer, since ligatures are a font-rendering effect. You're right that ligatures are not as common in modern books.

Might be iffier in OCR mode: it seems to use Tesseract, which is known to have issues recognising ligatured text.

shellac|1 year ago

The (standard) ripgrep regex engine has full Unicode support. My reading is that it should handle such equivalences, e.g. matching the decomposed version.

virtualritz|1 year ago

> I notice that the word "Office" in the title of this article is not rendered with a ligature by Chrome

Chrome mobile on Android does render Office with what looks like at least an fi ligature for me (it should use an ffi one but still).

Maybe it depends on the font?

miki123211|1 year ago

> I don't have any sense of how common ligature usage is anymore

It's much more common in PDFs than it is on the web, at least when the underlying plaintext is concerned.

wanderingmind|1 year ago

Awesome tool, and I use it often. One underutilized feature of rga is its integration with fuzzy search (fzf), which gives you interactive output instead of running commands and collecting their output in sequence. In short: use rga-fzf instead of rga on the CLI.
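To make the difference concrete, a hedged sketch (the paths and query strings are made up for illustration):

```shell
# Plain rga: one-shot search, matches printed to stdout
# (it also looks inside PDFs, zips, Office documents, etc.).
rga "invoice total" ~/Documents

# rga-fzf: the same search, but interactive: results stream into an
# fzf picker with a live preview pane, re-querying as you type.
rga-fzf "invoice total"
```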

sim7c00|1 year ago

Wish I had known about this when working support jobs, sifting for logs and lines in zipped 'support file' packages. Very nice!

justinmayer|1 year ago

Integrating ripgrep-all with fzf makes for a powerful combination when you want to recursively search the contents of a given directory.

I have been using a shell function to do this, and it works wonderfully well: https://github.com/phiresky/ripgrep-all/wiki/fzf-Integration

The built-in rga-fzf command appeared in v0.10 and ostensibly obviates the need for the above shell function, but the built-in command produces errors for me on macOS: https://github.com/phiresky/ripgrep-all/issues/240

bomewish|1 year ago

I also have a custom rga-fzf function, I think adapted from that:

    rga-fzf () {
        local query=$1
        local extension=$2
        if [ -z "$extension" ]; then
            RG_PREFIX="rga --files-with-matches --no-ignore"
        else
            RG_PREFIX="rga --files-with-matches --no-ignore --glob '*.$extension'"
        fi
        echo "RG Prefix: $RG_PREFIX"
        echo "Search Query: $query"
        FZF_DEFAULT_COMMAND="$RG_PREFIX '$query'" \
            fzf --sort \
                --preview="[[ ! -z {} ]] && rga --colors 'match:bg:yellow' --pretty --context 15 {q} {} | less -R" \
                --multi --phony -q "$query" \
                --color "hl:-1:underline,hl+:-1:underline:reverse" \
                --bind "change:reload:$RG_PREFIX {q}" \
                --preview-window="50%:wrap" \
                --bind "enter:execute-silent(echo {} | xargs -n 1 open -g)"
    }

It allows one to continually open lots of files found through rga-fzf, so one can look at them in $EDITOR all at once. Useful sometimes.

Gehinnn|1 year ago

Love this for searching in movie subtitles!

rectang|1 year ago

To what extent does reading these formats accurately require the execution of code within the documents? In other words, not just stuff like zip expansion by a library dependency of rga, but for example macros inside office documents or JavaScript inside PDFs.

Note: I have no reason to believe such code execution is actually happening — so please don't take this as FUD. My assumption is that a secure design would involve running only external code and thus would sacrifice a small amount of accuracy, possibly negligible.

fwip|1 year ago

Also note that it's not necessarily safe to read these documents even if you don't intend on executing embedded code. For example, reading from pdfs uses poppler, which has had a few CVEs that could result in arbitrary code execution, mostly around image decoding. https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=poppler

(No shade to poppler intended, just the first tool on the list I looked at.)

traverseda|1 year ago

None of them really execute "code". Pandoc has a pretty good write-up of the security implications of running it, which I think applies just as much to the other tools, with the added caveat of zip bombs.

https://pandoc.org/MANUAL.html#a-note-on-security

It's just text; this isn't ripgrepping through your Excel macros, just the data that's actually in the Excel file.

maxerickson|1 year ago

On average, the macros in an Office document add features to the software and aren't run to render any content. So like toggling a group of settings or inserting some content or whatever. They may change the content, but it's done at a point in time by the user, not each time the document is opened.

And then, on average, most users don't use macros in their documents.

So yes, negligible.

anthk|1 year ago

Use Recoll for that; check the recommended dependencies from your package manager. Synaptic is good for this: just right-click the package.

EDIT: For instance, under Trisquel/Ubuntu/Debian and derivatives, click on 'recollcmd' and mark all its dependencies via the right-click menu.

Install RecollGUI for a nice UI.

Now you will have something like Google Search, but libre, on your own desktop.

hagbard_c|1 year ago

To take it further, install recoll-webui [1] and SearxNG [2], enable the Recoll engine in the latter, and point it at the former for a web-accessible search engine over local as well as remote content. Make sure to put local content behind a password or other authentication unless you intend for it to be searchable by outside visitors.

Source: I made the recoll engine for Searx/SearxNG and have been using this system for many years now with a full-text index over close to a terabyte worth of data.

[1] https://github.com/koniu/recoll-webui

[2] https://github.com/searxng/searxng

gjadi|1 year ago

For emacs users in the room, there is consult-recoll.

nullifidian|1 year ago

It's somewhat similar to Recoll in functionality, except that with Recoll you need to index everything before searching. It even uses the same approach of relying on third-party software like poppler to extract the contents.

medoc|1 year ago

By the way, Recoll also has a utility named rclgrep, which is an index-less search. It does everything Recoll can do that can reasonably be done without an index (e.g. no proximity search, no stem expansion, etc.). It will search all file types supported by Recoll, including embedded documents (email attachments, archive members, etc.). It is not built or distributed by default, because I think that building an index is a better approach, but it's in the source tar distribution and can be built with -Drclgrep=true. Disclosure: I am the Recoll developer.

rollcat|1 year ago

I think an index of all documents (including the contained text etc) should be a standardized component / API of every modern OS. Windows has had one since Vista (no idea about the API though), Spotlight has been a part of OS X for two decades, and there are various solutions for Linux & friends; however as far as I can tell there's no cross-platform wrapper that would make any or all of these easy to integrate with e.g. your IDE. That would be cool to have.
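On macOS, for example, Spotlight's index is already queryable from the command line via mdfind, which is roughly the kind of OS-level API being asked for (the directory and query string here are illustrative):

```shell
# Full-text query against the Spotlight index, restricted to one
# directory tree with -onlyin; prints matching file paths.
mdfind -onlyin ~/Projects "ripgrep"
```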

gcr|1 year ago

Does this work on Android? I’d love to put this on my eInk tablet so I could get actual search for my book library.

nsonha|1 year ago

Is there anything like this with vector and multimodal search as well? I know I'm asking too much.

wdkrnls|1 year ago

How does this compare with ugrep? I know that does many of these things while sticking with C++.

pdpi|1 year ago

Ugrep seems to be a completely new codebase, whereas rga is a layer on top of ripgrep. Based on the benchmarks in the ripgrep GitHub repo, rg is somewhat more than 7x faster than ugrep.

jedisct1|1 year ago

Ever heard of ugrep?

seanthemon|1 year ago

Seems this project predates ugrep and has a nicer interface.

dcreater|1 year ago

Ripgrep's GitHub README shows clearly superior performance to ugrep.

burjui|1 year ago

C++ & autotools. No, thanks.

dcreater|1 year ago

What dependencies does it install, and does it create a bunch of caches or indices that clog up storage and/or memory? (Besides ripgrep.)

lafrenierejm|1 year ago

> What dependencies does it install

For all of the built-in adapters to work, you'll need ffmpeg, pandoc, and poppler-utils. See the Scoop package [1] for a specific example of this.

> does it create a bunch of caches that clog up storage and/or memory?

YMMV, but in my opinion ripgrep-all is pretty conservative in its caching. The cache files are all isolated to a single directory (whose location respects OS convention) and their contents are limited to plaintext that required processing to extract.

[1]: https://github.com/ScoopInstaller/Main/blob/master/bucket/rg...